* [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Hi all.

(Sorry, I screwed up the CC list in the previous mails, so I'm resending this.)

This patch set introduces the kernel address sanitizer (kasan).
Address sanitizer is a dynamic memory error detector. It detects:
 - Use-after-free bugs.
 - Out-of-bounds reads/writes in kmalloc'ed memory.

The following are possible, but not implemented yet or not included in this patch series:
 - Global buffer overflow
 - Stack buffer overflow
 - Use after return

These patches contain kasan for the x86/x86_64/arm architectures, for the buddy and SLUB allocators.

Patches are based on next-20140704 and are also available in git:
	git://github.com/aryabinin/linux.git --branch=kasan/kasan_v1

The main idea was borrowed from https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel.
The original implementation (x86_64 only and only for SLAB) by Andrey Konovalov can be
found at http://github.com/xairy/linux. Some of the code in these patches was taken from there.

To use this feature you need a fairly fresh GCC (revision r211699 from 2014-06-16 or
later).

To enable kasan, configure the kernel with:
     CONFIG_KASAN=y
and
     CONFIG_KASAN_SANITIZE_ALL=y

Currently KASAN works only with the SLUB allocator. It is highly recommended to run KASAN with
CONFIG_SLUB_DEBUG=y and 'slub_debug=U' on the boot command line to enable user tracking
(alloc and free stack traces).
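
To give a feel for what kasan catches, here is a minimal, hypothetical snippet in the spirit of the
kmalloc_bug_test module added by the last patch (the function name is made up; it only needs
<linux/slab.h>):

     /* Writes one byte past the end of a 17-byte kmalloc allocation.
      * With kasan enabled this out-of-bounds write should be reported.
      */
     static noinline void kmalloc_oob_right_example(void)
     {
             size_t size = 17;
             char *ptr = kmalloc(size, GFP_KERNEL);

             if (!ptr)
                     return;

             ptr[size] = 'x';    /* out-of-bounds write into the redzone */
             kfree(ptr);
     }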

Basic concept of kasan:

The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use compiler instrumentation to check the shadow memory
on each memory access.

Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
             return (addr >> KASAN_SHADOW_SCALE_SHIFT)
                     + kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
     }

where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
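
To make the encoding concrete, here is a minimal sketch of how a single 1-byte access could be
validated against its shadow byte (equivalent to the check done by the reference implementation
in mm/kasan/kasan.c from patch 01; kasan_mem_to_shadow() is the function shown above and
KASAN_SHADOW_MASK is KASAN_SHADOW_SCALE_SIZE - 1):

     static bool byte_is_poisoned(unsigned long addr)
     {
             s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

             if (shadow == 0)
                     return false;   /* whole 8-byte region is addressable */
             if (shadow < 0)
                     return true;    /* redzone, freed memory, etc. */
             /* 1 <= shadow <= 7: only the first 'shadow' bytes are addressable */
             return (addr & KASAN_SHADOW_MASK) >= shadow;
     }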

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by consulting the
corresponding shadow memory. If the access is not valid, an error report is printed.
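
For illustration, the effect of the instrumentation on a simple store looks roughly like this
(conceptual example, not literal GCC output; __asan_store4() is one of the hooks added by this
series):

     /* Original code: */
     void set_flag(int *flag)
     {
             *flag = 1;
     }

     /* What the instrumented code effectively does: */
     void set_flag(int *flag)
     {
             __asan_store4((unsigned long)flag);  /* check shadow for a 4-byte write */
             *flag = 1;
     }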


TODO:
 - Optimizations: __asan_load*/__asan_store* are called for every memory access, so it's
        important to make them as fast as possible.
        This patch set introduces only a reference design of the memory checking algorithm. It's
        slow but very simple, so anyone can easily understand the basic concept.
        In future versions I'll try to bring optimized versions with some numbers.

 - It seems like the guard pages introduced in c0a32f (mm: more intensive memory corruption debugging)
       could easily be reused for kasan as well.

 - Get rid of the kasan_disable_local()/kasan_enable_local() functions. They are
       used in some rare cases when we need to validly access poisoned areas. These functions might be an
       obstacle for inline instrumentation (see below).

TODO (probably not for this series):
 - Quarantine for slub. For stronger use-after-free detection we need to delay the reuse of freed
      slabs. So we need something similar to guard pages in the buddy allocator. Such a quarantine might
      be useful even without kasan.

 - Inline instrumentation. Inline instrumentation means that the fast path of the __asan_load*/__asan_store*
    calls will be implemented in the compiler: instead of inserting function calls, the compiler will emit
    this fast path directly (a sketch of such an inline check follows this list). To be able to do this we need (at least) to:
       a) get rid of kasan_disable()/kasan_enable() (see above)
       b) get rid of the kasan_initialized flag. The main reason why we have this flag now is that we don't
          have any shadow during the early stages of boot.

          Konstantin Khlebnikov suggested a way to solve this issue:
               We could reserve virtual address space for the shadow and map pages at a very early stage of the
               boot process (for x86_64 I think it should be done somewhere in x86_64_start_kernel).
               So we would have shadow memory all the time and the kasan_initialized flag would no longer be required.

 - Stack instrumentation (currently not supported in mainline GCC, though it is possible)
 - Global variables instrumentation
 - Use after return
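
For reference, the inline fast path mentioned in the TODO above could look roughly like the
following (a hypothetical sketch of what the compiler would emit for a 4-byte store, assuming the
shadow mapping above; __asan_report_store4() stands for whatever slow-path reporting routine the
runtime would provide and is illustrative only):

     /* Hypothetical inlined check for a 4-byte store to 'addr'.
      * Assumes the access does not cross an 8-byte granule boundary.
      */
     static __always_inline void inline_check_store4(unsigned long addr)
     {
             s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);
             s8 last_byte = (addr & KASAN_SHADOW_MASK) + 3;

             if (unlikely(shadow != 0) && unlikely(last_byte >= shadow))
                     __asan_report_store4(addr);  /* slow path: print the report */
     }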



[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

List of already fixed bugs found by address sanitizer:

aab515d (fib_trie: remove potential out of bound access)
984f173 ([SCSI] sd: Fix potential out-of-bounds access)
5e9ae2e (aio: fix use-after-free in aio_migratepage)
2811eba (ipv6: udp packets following an UFO enqueued packet need also be handled by UFO)
057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
9709674 (ipv4: fix a race in ip4_datagram_release_cb())
4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
624483f (mm: rmap: fix use-after-free in __put_anon_vma)

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Alexey Preobrazhensky <preobr@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-kbuild@vger.kernel.org>
Cc: <linux-arm-kernel@lists.infradead.org>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>

Andrey Ryabinin (21):
  Add kernel address sanitizer infrastructure.
  init: main: initialize kasan's shadow area on boot
  x86: add kasan hooks for memcpy/memmove/memset functions
  x86: boot: vdso: disable instrumentation for code not linked with
    kernel
  x86: cpu: don't sanitize early stages of a secondary CPU boot
  x86: mm: init: allocate shadow memory for kasan
  x86: Kconfig: enable kernel address sanitizer
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: Makefile: kasan: don't instrument slub.c and slab_common.c files
  mm: slab: share virt_to_cache() between slab and slub
  mm: slub: share slab_err and object_err functions
  mm: util: move krealloc/kzfree to slab_common.c
  mm: slub: add allocation size field to struct kmem_cache
  mm: slub: kasan: disable kasan when touching unaccessible memory
  mm: slub: add kernel address sanitizer hooks to slub allocator
  arm: boot: compressed: disable kasan's instrumentation
  arm: add kasan hooks for memcpy/memmove/memset functions
  arm: mm: reserve shadow memory for kasan
  arm: Kconfig: enable kernel address sanitizer
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  lib: add kmalloc_bug_test module

 Documentation/kasan.txt           | 224 ++++++++++++++++++++
 Makefile                          |   8 +-
 arch/arm/Kconfig                  |   1 +
 arch/arm/boot/compressed/Makefile |   2 +
 arch/arm/include/asm/string.h     |  30 +++
 arch/arm/mm/init.c                |   3 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/string_32.h  |  28 +++
 arch/x86/include/asm/string_64.h  |  24 +++
 arch/x86/kernel/cpu/Makefile      |   3 +
 arch/x86/lib/Makefile             |   2 +
 arch/x86/mm/init.c                |   3 +
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 commit                            |   3 +
 fs/dcache.c                       |   3 +
 include/linux/kasan.h             |  61 ++++++
 include/linux/sched.h             |   4 +
 include/linux/slab.h              |  19 +-
 include/linux/slub_def.h          |   5 +
 init/main.c                       |   3 +-
 lib/Kconfig.debug                 |  10 +
 lib/Kconfig.kasan                 |  22 ++
 lib/Makefile                      |   1 +
 lib/test_kmalloc_bugs.c           | 254 +++++++++++++++++++++++
 mm/Makefile                       |   5 +
 mm/kasan/Makefile                 |   3 +
 mm/kasan/kasan.c                  | 420 ++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                  |  42 ++++
 mm/kasan/report.c                 | 187 +++++++++++++++++
 mm/page_alloc.c                   |   4 +
 mm/slab.c                         |   6 -
 mm/slab.h                         |  25 ++-
 mm/slab_common.c                  |  96 +++++++++
 mm/slub.c                         |  50 ++++-
 mm/util.c                         |  91 ---------
 scripts/Makefile.lib              |  10 +
 40 files changed, 1550 insertions(+), 111 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 commit
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kmalloc_bugs.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
1.8.5.5


* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Address sanitizer for kernel (kasan) is a dynamic memory error detector.

The main features of kasan are that it:
 - is based on compiler instrumentation (fast),
 - detects out-of-bounds accesses for both writes and reads,
 - provides use-after-free detection.

This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some of the code were borrowed from [1].

This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
later).

Implementation details:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use compiler instrumentation to check the shadow memory
on each memory access.

Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
                             + kasan_shadow_start;
     }

where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by consulting the
corresponding shadow memory. If the access is not valid, an error report is printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 224 +++++++++++++++++++++++++++++++++++++
 Makefile                |   8 +-
 commit                  |   3 +
 include/linux/kasan.h   |  33 ++++++
 include/linux/sched.h   |   4 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  20 ++++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 292 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  36 ++++++
 mm/kasan/report.c       | 157 ++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 13 files changed, 792 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 commit
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..141391ba
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,224 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN is better than CONFIG_DEBUG_PAGEALLOC, because it:
+ - is based on compiler instrumentation (fast),
+ - detects OOB for both writes and reads,
+ - provides UAF detection,
+ - prints informative reports.
+
+KASAN uses compiler instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 4.10.0.
+
+Currently KASAN is supported on the x86/x86_64/arm architectures and requires the kernel
+to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 4.10.0).
+
+To enable KASAN, configure the kernel with:
+
+	  CONFIG_KASAN=y
+
+and to instrument the entire kernel:
+
+	  CONFIG_KASAN_SANITIZE_ALL=y
+
+Currently KASAN works only with SLUB. It is highly recommended to run KASAN with
+CONFIG_SLUB_DEBUG=y and 'slub_debug=U'. This enables user tracking (free and alloc traces).
+There is no need to enable redzoning since KASAN detects accesses to the user tracking structs,
+so they effectively act as redzones.
+
+To enable instrumentation for only specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := y
+
+        For all files in one directory:
+                KASAN_SANITIZE := y
+
+To exclude files from being instrumented even when CONFIG_KASAN_SANITIZE_ALL
+is specified, use:
+
+                KASAN_SANITIZE_main.o := n
+        and:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical buffer overflow report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+2.1. Shadow memory
+==================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use instrumentation to check the shadow memory on each memory
+access.
+
+AddressSanitizer dedicates one-eighth of the low memory to its shadow
+memory and uses direct mapping with a scale and offset to translate a memory
+address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+The figure below shows the address space layout. The memory is split
+into two parts (low and high) which map to the corresponding shadow regions.
+Applying the shadow mapping to addresses in the shadow region gives us
+addresses in the Bad region.
+
+|--------|        |--------|
+| Memory |----    | Memory |
+|--------|    \   |--------|
+| Shadow |--   -->| Shadow |
+|--------|  \     |--------|
+|   Bad  |   ---->|  Bad   |
+|--------|  /     |--------|
+| Shadow |--   -->| Shadow |
+|--------|    /   |--------|
+| Memory |----    | Memory |
+|--------|        |--------|
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
+
+2.2. Instrumentation
+====================
+
+Since some functions (such as memset, memmove, memcpy) which do memory accesses
+are written in assembly, the compiler can't instrument them.
+Therefore we replace these functions with our own instrumented functions
+(kasan_memset, kasan_memcpy, kasan_memmove).
+In some circumstances you may need to use the original functions;
+in that case, insert #undef KASAN_HOOKS before the includes.
+
diff --git a/Makefile b/Makefile
index 64ab7b3..08a07f2 100644
--- a/Makefile
+++ b/Makefile
@@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
+			--param asan-use-after-return=0 \
+			--param asan-globals=0 \
+			--param asan-memintrin=0 \
+			--param asan-instrumentation-with-call-threshold=0 \
+			-DKASAN_HOOKS
 
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
@@ -428,7 +434,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
diff --git a/commit b/commit
new file mode 100644
index 0000000..134f4dd
--- /dev/null
+++ b/commit
@@ -0,0 +1,3 @@
+
+I'm working on address sanitizer for kernel.
+fuck this bloody.
\ No newline at end of file
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..7efc3eb
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,33 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+void unpoison_shadow(const void *address, size_t size);
+
+void kasan_enable_local(void);
+void kasan_disable_local(void);
+
+/* Reserves shadow memory. */
+void kasan_alloc_shadow(void);
+void kasan_init_shadow(void);
+
+#else /* CONFIG_KASAN */
+
+static inline void unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+/* Reserves shadow memory. */
+static inline void kasan_init_shadow(void) {}
+static inline void kasan_alloc_shadow(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..286650a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1471,6 +1471,10 @@ struct task_struct {
 	gfp_t lockdep_reclaim_gfp;
 #endif
 
+#ifdef CONFIG_KASAN
+	int kasan_depth;
+#endif
+
 /* journalling filesystem info */
 	void *journal_info;
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cf9cf82..67a4dfc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -611,6 +611,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..2bfff78
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,20 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: dynamic memory error detector"
+	default n
+	help
+	  Enables AddressSanitizer - a dynamic memory error detector
+	  that finds out-of-bounds and use-after-free bugs.
+
+config KASAN_SANITIZE_ALL
+	bool "Instrument entire kernel"
+	depends on KASAN
+	default y
+	help
+	  This enables compiler instrumentation of the entire kernel.
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index e4a97bd..dbe9a22 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,3 +64,4 @@ obj-$(CONFIG_ZPOOL)	+= zpool.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..e2cd345
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,292 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+static bool __read_mostly kasan_initialized;
+
+unsigned long kasan_shadow_start;
+unsigned long kasan_shadow_end;
+
+/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
+unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
+
+
+static inline bool addr_is_in_mem(unsigned long addr)
+{
+	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
+}
+
+void kasan_enable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth--;
+}
+
+void kasan_disable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth++;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return likely(kasan_initialized
+		&& !current->kasan_depth);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void unpoison_shadow(const void *address, size_t size)
+{
+	poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+							size_t size)
+{
+	unsigned long end = addr + size;
+	for (; addr < end; addr++)
+		if (unlikely(address_is_poisoned(addr)))
+			return addr;
+	return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	unsigned long access_addr;
+	struct access_info info;
+
+	if (!kasan_enabled())
+		return;
+
+	if (unlikely(addr < TASK_SIZE)) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (!addr_is_in_mem(addr))
+		return;
+
+	access_addr = memory_is_poisoned(addr, size);
+	if (likely(access_addr == 0))
+		return;
+
+	info.access_addr = access_addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __init kasan_alloc_shadow(void)
+{
+	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
+	unsigned long shadow_size;
+	phys_addr_t shadow_phys_start;
+
+	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
+
+	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
+	if (!shadow_phys_start) {
+		pr_err("Unable to reserve shadow memory\n");
+		return;
+	}
+
+	kasan_shadow_start = (unsigned long)phys_to_virt(shadow_phys_start);
+	kasan_shadow_end = kasan_shadow_start + shadow_size;
+
+	pr_info("reserved shadow memory: [0x%lx - 0x%lx]\n",
+		kasan_shadow_start, kasan_shadow_end);
+	kasan_shadow_offset = kasan_shadow_start -
+		(PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
+}
+
+void __init kasan_init_shadow(void)
+{
+	if (kasan_shadow_start) {
+		unpoison_shadow((void *)PAGE_OFFSET,
+				(size_t)(kasan_shadow_start - PAGE_OFFSET));
+		poison_shadow((void *)kasan_shadow_start,
+			kasan_shadow_end - kasan_shadow_start,
+			KASAN_SHADOW_GAP);
+		unpoison_shadow((void *)kasan_shadow_end,
+				(size_t)(high_memory - kasan_shadow_end));
+		kasan_initialized = true;
+		pr_info("shadow memory initialized\n");
+	}
+}
+
+void *kasan_memcpy(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memcpy(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memcpy);
+
+void *kasan_memset(void *ptr, int val, size_t len)
+{
+	if (unlikely(len == 0))
+		return ptr;
+
+	check_memory_region((unsigned long)ptr, len, true);
+
+	return memset(ptr, val, len);
+}
+EXPORT_SYMBOL(kasan_memset);
+
+void *kasan_memmove(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memmove(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memmove);
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to silence compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - kasan_shadow_start)
+		<< KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+	return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "buffer overflow";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	page = virt_to_page(info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+				 char *output)
+{
+	/* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+	unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+	unsigned long i;
+
+	for (i = 0; i < space_count; i++)
+		output[i] = ' ';
+	output[space_count] = '^';
+	output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[100];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+		if (row_is_guilty(aligned_shadow, shadow)) {
+			print_shadow_pointer(aligned_shadow, shadow, buffer);
+			pr_err("%s\n", buffer);
+		}
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+void kasan_report_error(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->access_addr);
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-09 11:29   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Address sanitizer for kernel (kasan) is a dynamic memory error detector.

The main features of kasan are that it:
 - is based on compiler instrumentation (fast),
 - detects out-of-bounds accesses for both writes and reads,
 - provides use-after-free detection.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
later).

Implementation details:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
                             + kasan_shadow_start;
     }

where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by consulting the
corresponding shadow memory. If the access is not valid, an error is reported.
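
To make the effect of the instrumentation concrete, here is a rough sketch of what an
instrumented 4-byte load conceptually looks like (the real transformation happens inside
GCC, so this is an illustration only; read_int() is a made-up example function):

	int read_int(int *p)
	{
		/* compiler-inserted shadow check for the 4-byte access at p */
		__asan_load4((unsigned long)p);
		/* the original access */
		return *p;
	}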

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 224 +++++++++++++++++++++++++++++++++++++
 Makefile                |   8 +-
 commit                  |   3 +
 include/linux/kasan.h   |  33 ++++++
 include/linux/sched.h   |   4 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  20 ++++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 292 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  36 ++++++
 mm/kasan/report.c       | 157 ++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 13 files changed, 792 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 commit
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..141391ba
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,224 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN is better than CONFIG_DEBUG_PAGEALLOC, because it:
+ - is based on compiler instrumentation (fast),
+ - detects OOB for both writes and reads,
+ - provides UAF detection,
+ - prints informative reports.
+
+KASAN uses compiler instrumentation for checking every memory access; therefore you
+will need a special compiler: GCC >= 4.10.0.
+
+Currently KASAN is supported on the x86/x86_64/arm architectures and requires the
+kernel to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 4.10.0).
+
+To enable KASAN, configure the kernel with:
+
+	  CONFIG_KASAN = y
+
+and, to instrument the entire kernel:
+
+	  CONFIG_KASAN_SANITIZE_ALL = y
+
+Currently KASAN works only with SLUB. It is highly recommended to run KASAN with
+CONFIG_SLUB_DEBUG=y and 'slub_debug=U'. This enables user tracking (free and alloc traces).
+There is no need to enable redzoning since KASAN detects access to user tracking structs
+so they actually act like redzones.
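+
+For a concrete example, a typical KASAN debugging configuration (CONFIG_SLUB is
+implied by the SLUB requirement above) would therefore be:
+
+	CONFIG_KASAN=y
+	CONFIG_KASAN_SANITIZE_ALL=y
+	CONFIG_SLUB=y
+	CONFIG_SLUB_DEBUG=y
+
+with 'slub_debug=U' added to the kernel command line.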
+
+To enable instrumentation for only specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := y
+
+        For all files in one directory:
+                KASAN_SANITIZE := y
+
+To exclude files from being instrumented even when CONFIG_KASAN_SANITIZE_ALL
+is specified, use:
+
+                KASAN_SANITIZE_main.o := n
+        and:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
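+
+For instance, a directory Makefile that instruments everything in that directory
+except one object (a hypothetical foo.o) could combine the two settings:
+
+                KASAN_SANITIZE := y
+                KASAN_SANITIZE_foo.o := n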
+
+
+1.1 Error reports
+=================
+
+A typical buffer overflow report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of a KASAN_SHADOW_SCALE_SIZE region belong to an allocated
+memory block, these bytes are partially addressable and are marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrows point to the shadow byte 03, which means that the
+accessed address is partially addressable.
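+
+As another worked example (assuming the object starts on an 8-byte boundary), a
+kmalloc(100) object covers twelve fully addressable 8-byte regions plus 4 bytes of
+the thirteenth, so its shadow looks like:
+
+  00 00 00 00 00 00 00 00 00 00 00 00 04 fc fc ...
+
+where fc is the KASAN_KMALLOC_REDZONE marker listed above.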
+
+
+2. Implementation details
+========================
+
+2.1. Shadow memory
+==================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use instrumentation to check the shadow memory on each memory
+access.
+
+AddressSanitizer dedicates one-eighth of the low memory to its shadow
+memory and uses direct mapping with a scale and offset to translate a memory
+address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
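+
+For example, on a 32-bit configuration with PAGE_OFFSET == 0xc0000000 (an assumption
+used only for illustration), the byte at address 0xc1000008 is described by the shadow
+byte at KASAN_SHADOW_START + ((0xc1000008 - 0xc0000000) >> 3), i.e. at
+KASAN_SHADOW_START + 0x200001.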
+
+The figure below shows the address space layout. The memory is split
+into two parts (low and high) which map to the corresponding shadow regions.
+Applying the shadow mapping to addresses in the shadow region gives us
+addresses in the Bad region.
+
+|--------|        |--------|
+| Memory |----    | Memory |
+|--------|    \   |--------|
+| Shadow |--   -->| Shadow |
+|--------|  \     |--------|
+|   Bad  |   ---->|  Bad   |
+|--------|  /     |--------|
+| Shadow |--   -->| Shadow |
+|--------|    /   |--------|
+| Memory |----    | Memory |
+|--------|        |--------|
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
+
+2.2. Instrumentation
+====================
+
+Since some functions which do memory accesses (such as memset, memmove, memcpy)
+are written in assembly, the compiler can't instrument them.
+Therefore we replace these functions with our own instrumented functions
+(kasan_memset, kasan_memcpy, kasan_memmove).
+In some circumstances you may need to use the original functions;
+in that case, insert '#undef KASAN_HOOKS' before the includes.
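+
+The wiring of these replacements is not part of this patch. A minimal sketch of the
+idea (illustrative only, not the actual definitions used by the series) is a string
+header that does:
+
+	#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+	#define memcpy(dst, src, len) kasan_memcpy(dst, src, len)
+	#define memset(ptr, val, len) kasan_memset(ptr, val, len)
+	#define memmove(dst, src, len) kasan_memmove(dst, src, len)
+	#endif
+
+so that an early '#undef KASAN_HOOKS' keeps the original, uninstrumented functions.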
+
diff --git a/Makefile b/Makefile
index 64ab7b3..08a07f2 100644
--- a/Makefile
+++ b/Makefile
@@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
+			--param asan-use-after-return=0 \
+			--param asan-globals=0 \
+			--param asan-memintrin=0 \
+			--param asan-instrumentation-with-call-threshold=0 \
+			-DKASAN_HOOKS
 
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
@@ -428,7 +434,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
diff --git a/commit b/commit
new file mode 100644
index 0000000..134f4dd
--- /dev/null
+++ b/commit
@@ -0,0 +1,3 @@
+
+I'm working on address sanitizer for kernel.
+fuck this bloody.
\ No newline at end of file
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..7efc3eb
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,33 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+void unpoison_shadow(const void *address, size_t size);
+
+void kasan_enable_local(void);
+void kasan_disable_local(void);
+
+/* Reserves shadow memory. */
+void kasan_alloc_shadow(void);
+void kasan_init_shadow(void);
+
+#else /* CONFIG_KASAN */
+
+static inline void unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+/* Reserves shadow memory. */
+static inline void kasan_init_shadow(void) {}
+static inline void kasan_alloc_shadow(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..286650a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1471,6 +1471,10 @@ struct task_struct {
 	gfp_t lockdep_reclaim_gfp;
 #endif
 
+#ifdef CONFIG_KASAN
+	int kasan_depth;
+#endif
+
 /* journalling filesystem info */
 	void *journal_info;
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cf9cf82..67a4dfc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -611,6 +611,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..2bfff78
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,20 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: dynamic memory error detector"
+	default n
+	help
+	  Enables AddressSanitizer - a dynamic memory error detector
+	  that finds out-of-bounds and use-after-free bugs.
+
+config KASAN_SANITIZE_ALL
+	bool "Instrument entire kernel"
+	depends on KASAN
+	default y
+	help
+	  This enables compiler instrumentation for the entire kernel.
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index e4a97bd..dbe9a22 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,3 +64,4 @@ obj-$(CONFIG_ZPOOL)	+= zpool.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..e2cd345
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,292 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+static bool __read_mostly kasan_initialized;
+
+unsigned long kasan_shadow_start;
+unsigned long kasan_shadow_end;
+
+/* equal to (kasan_shadow_start - PAGE_OFFSET / KASAN_SHADOW_SCALE_SIZE) */
+unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
+
+
+static inline bool addr_is_in_mem(unsigned long addr)
+{
+	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
+}
+
+void kasan_enable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth--;
+}
+
+void kasan_disable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth++;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return likely(kasan_initialized
+		&& !current->kasan_depth);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void unpoison_shadow(const void *address, size_t size)
+{
+	poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+							size_t size)
+{
+	unsigned long end = addr + size;
+	for (; addr < end; addr++)
+		if (unlikely(address_is_poisoned(addr)))
+			return addr;
+	return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	unsigned long access_addr;
+	struct access_info info;
+
+	if (!kasan_enabled())
+		return;
+
+	if (unlikely(addr < TASK_SIZE)) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (!addr_is_in_mem(addr))
+		return;
+
+	access_addr = memory_is_poisoned(addr, size);
+	if (likely(access_addr == 0))
+		return;
+
+	info.access_addr = access_addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __init kasan_alloc_shadow(void)
+{
+	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
+	unsigned long shadow_size;
+	phys_addr_t shadow_phys_start;
+
+	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
+
+	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
+	if (!shadow_phys_start) {
+		pr_err("Unable to reserve shadow memory\n");
+		return;
+	}
+
+	kasan_shadow_start = (unsigned long)phys_to_virt(shadow_phys_start);
+	kasan_shadow_end = kasan_shadow_start + shadow_size;
+
+	pr_info("reserved shadow memory: [0x%lx - 0x%lx]\n",
+		kasan_shadow_start, kasan_shadow_end);
+	kasan_shadow_offset = kasan_shadow_start -
+		(PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
+}
+
+void __init kasan_init_shadow(void)
+{
+	if (kasan_shadow_start) {
+		unpoison_shadow((void *)PAGE_OFFSET,
+				(size_t)(kasan_shadow_start - PAGE_OFFSET));
+		poison_shadow((void *)kasan_shadow_start,
+			kasan_shadow_end - kasan_shadow_start,
+			KASAN_SHADOW_GAP);
+		unpoison_shadow((void *)kasan_shadow_end,
+				(size_t)(high_memory - kasan_shadow_end));
+		kasan_initialized = true;
+		pr_info("shadow memory initialized\n");
+	}
+}
+
+void *kasan_memcpy(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memcpy(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memcpy);
+
+void *kasan_memset(void *ptr, int val, size_t len)
+{
+	if (unlikely(len == 0))
+		return ptr;
+
+	check_memory_region((unsigned long)ptr, len, true);
+
+	return memset(ptr, val, len);
+}
+EXPORT_SYMBOL(kasan_memset);
+
+void *kasan_memmove(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memmove(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memmove);
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - kasan_shadow_start)
+		<< KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+	return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "buffer overflow";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	page = virt_to_page(info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+				 char *output)
+{
+	/* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+	unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+	unsigned long i;
+
+	for (i = 0; i < space_count; i++)
+		output[i] = ' ';
+	output[space_count] = '^';
+	output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[100];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+		if (row_is_guilty(aligned_shadow, shadow)) {
+			print_shadow_pointer(aligned_shadow, shadow, buffer);
+			pr_err("%s\n", buffer);
+		}
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+void kasan_report_error(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->access_addr);
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - kasan_shadow_start)
+		<< KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+	return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "buffer overflow";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	page = virt_to_page(info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+				 char *output)
+{
+	/* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+	unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+	unsigned long i;
+
+	for (i = 0; i < space_count; i++)
+		output[i] = ' ';
+	output[space_count] = '^';
+	output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[100];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+		if (row_is_guilty(aligned_shadow, shadow)) {
+			print_shadow_pointer(aligned_shadow, shadow, buffer);
+			pr_err("%s\n", buffer);
+		}
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+void kasan_report_error(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->access_addr);
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+        pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+        pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+               info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the KASAN_SANITIZE_obj.o and KASAN_SANITIZE variables)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
1.8.5.5
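
A minimal usage sketch for the kasan_disable_local()/kasan_enable_local()
helpers added above (the caller and its helper are hypothetical, not part of
this patch): code that has to touch poisoned memory on purpose can bump the
per-task depth counter, so kasan_enabled() returns false and
check_memory_region() skips reporting for the current task - the same trick
kasan_report_error() uses around its own printing:

	#include <linux/kasan.h>
	#include <linux/printk.h>
	#include <linux/types.h>

	/* Dump 'len' bytes without triggering KASAN reports for these reads. */
	static void dump_bytes_unchecked(const u8 *p, size_t len)
	{
		size_t i;

		kasan_disable_local();		/* current->kasan_depth++ */
		for (i = 0; i < len; i++)
			pr_cont(" %02x", p[i]);	/* instrumented loads, but not reported */
		kasan_enable_local();		/* current->kasan_depth-- */
		pr_cont("\n");
	}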

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:29   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

This patch initializes the shadow area after it has been allocated by arch code.
All low memory is marked as accessible except for the shadow area itself.
Later, free_all_bootmem() will release the pages to the buddy allocator,
and those pages will be marked as inaccessible until somebody
allocates them.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 init/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/init/main.c b/init/main.c
index bb1aed9..d06a636 100644
--- a/init/main.c
+++ b/init/main.c
@@ -78,6 +78,7 @@
 #include <linux/context_tracking.h>
 #include <linux/random.h>
 #include <linux/list.h>
+#include <linux/kasan.h>
 
 #include <asm/io.h>
 #include <asm/bugs.h>
@@ -549,7 +550,7 @@ asmlinkage __visible void __init start_kernel(void)
 			   set_init_arg);
 
 	jump_label_init();
-
+	kasan_init_shadow();
 	/*
 	 * These use large bootmem allocations and must precede
 	 * kmem_cache_init()
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks for memcpy/memmove/memset functions
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:29   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.

This patch replaces these functions with our own instrumented
functions (kasan_mem*) for CONFIG_KASAN=y.

In rare circumstances you may need to use the original functions;
in such a case, put #undef KASAN_HOOKS before the includes.
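
For illustration, a translation unit that needs the raw routines could look
roughly like this (the file and function names are made up, not from the tree):

	/* raw_copy.c: wants the real memcpy(), not kasan_memcpy() */
	#undef KASAN_HOOKS
	#include <linux/string.h>

	static void raw_copy(void *dst, const void *src, size_t len)
	{
		memcpy(dst, src, len);	/* stays the original memcpy() */
	}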

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/string_32.h | 28 ++++++++++++++++++++++++++++
 arch/x86/include/asm/string_64.h | 24 ++++++++++++++++++++++++
 arch/x86/lib/Makefile            |  2 ++
 3 files changed, 54 insertions(+)

diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
index 3d3e835..a86615a 100644
--- a/arch/x86/include/asm/string_32.h
+++ b/arch/x86/include/asm/string_32.h
@@ -321,6 +321,32 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
 	 : __memset_generic((s), (c), (count)))
 
 #define __HAVE_ARCH_MEMSET
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such case put #undef KASAN_HOOKS before includes.
+ */
+
+#undef memcpy
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#else /* CONFIG_KASAN && KASAN_HOOKS */
+
 #if (__GNUC__ >= 4)
 #define memset(s, c, count) __builtin_memset(s, c, count)
 #else
@@ -331,6 +357,8 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
 	 : __memset((s), (c), (count)))
 #endif
 
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
 /*
  * find the first occurrence of byte 'c', or 1 past the area if none
  */
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..2af2dbe 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -63,6 +63,30 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such case put #undef KASAN_HOOKS before includes.
+ */
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 4d4f96a..d82bc35 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -2,6 +2,8 @@
 # Makefile for x86 specific library files.
 #
 
+KASAN_SANITIZE_memcpy_32.o := n
+
 inat_tables_script = $(srctree)/arch/x86/tools/gen-insn-attr-x86.awk
 inat_tables_maps = $(srctree)/arch/x86/lib/x86-opcode-map.txt
 quiet_cmd_inat_tables = GEN     $@
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:29   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

To avoid build errors, the compiler's instrumentation must be disabled
for code that is not linked with the kernel image.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/Makefile            | 2 ++
 arch/x86/boot/compressed/Makefile | 2 ++
 arch/x86/realmode/Makefile        | 2 +-
 arch/x86/realmode/rm/Makefile     | 1 +
 arch/x86/vdso/Makefile            | 1 +
 5 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index dbe8dd2..9204cc0 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 0fcd913..64a92b3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 61b04fe..90daad6 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:29   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Instrumentation of these files may result in an unbootable machine.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 7fd54f0..a7bb360 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -8,6 +8,9 @@ CFLAGS_REMOVE_common.o = -pg
 CFLAGS_REMOVE_perf_event.o = -pg
 endif
 
+KASAN_SANITIZE_common.o := n
+KASAN_SANITIZE_perf_event.o := n
+
 # Make sure load_percpu_segment has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_common.o		:= $(nostackp)
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f971306..d9925ee 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -4,6 +4,7 @@
 #include <linux/swap.h>
 #include <linux/memblock.h>
 #include <linux/bootmem.h>	/* for max_low_pfn */
+#include <linux/kasan.h>
 
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
@@ -678,5 +679,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_alloc_shadow();
 }
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Now everything in the x86 code is ready for kasan. Enable it.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8657c06..f9863b3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -132,6 +132,7 @@ config X86
 	select HAVE_CC_STACKPROTECTOR
 	select GENERIC_CPU_AUTOPROBE
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_KASAN
 
 config INSTRUCTION_DECODER
 	def_bool y
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Add kernel address sanitizer hooks to mark an allocated page's addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
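
For illustration, this is the kind of bug that becomes detectable with these
hooks (hypothetical function, not from the tree; assumes the file is built
with the compiler instrumentation enabled): reading a page after it has been
freed hits shadow bytes set to KASAN_FREE_PAGE, so the instrumented load is
reported as a use-after-free:

	#include <linux/gfp.h>
	#include <linux/types.h>

	static u8 page_use_after_free(void)
	{
		u8 *p = (u8 *)__get_free_page(GFP_KERNEL);
		u8 val;

		if (!p)
			return 0;
		free_pages((unsigned long)p, 0);	/* kasan_free_pages() poisons the shadow */
		val = *p;				/* instrumented load -> "use after free" report */
		return val;
	}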

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/Makefile           |  2 ++
 mm/kasan/kasan.c      | 18 ++++++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  4 ++++
 6 files changed, 38 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7efc3eb..4adc0a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,9 @@ void kasan_disable_local(void);
 void kasan_alloc_shadow(void);
 void kasan_init_shadow(void);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
 static inline void kasan_init_shadow(void) {}
 static inline void kasan_alloc_shadow(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/Makefile b/mm/Makefile
index dbe9a22..6a9c3f8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,8 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_page_alloc.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e2cd345..109478e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
 	}
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (likely(page && !PageHighMem(page)))
+		unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (likely(!PageHighMem(page)))
+		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
 void *kasan_memcpy(void *dst, const void *src, size_t len)
 {
 	if (unlikely(len == 0))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 711ae4f..be9597e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2430e05..6ef9e57 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "buffer overflow";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_page(info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c9eeec..67833d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -61,6 +61,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -2807,6 +2809,7 @@ out:
 	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
 		goto retry_cpuset;
 
+	kasan_alloc_pages(page, order);
 	return page;
 }
 EXPORT_SYMBOL(__alloc_pages_nodemask);
@@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	if (end != outer_end)
 		free_contig_range(end, outer_end - end);
 
+	kasan_alloc_pages(pfn_to_page(start), end - start);
 done:
 	undo_isolate_page_range(pfn_max_align_down(start),
 				pfn_max_align_up(end), migratetype);
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Code in slub.c and slab_common.c can validly access objects' redzones, so
these files should not be instrumented by kasan.
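
For illustration, a minimal sketch of the kind of slub-side code meant here
(simplified and assumed, modelled on slub's debug checks rather than quoted
from them):

	/*
	 * Reads redzone bytes that kasan marks as poisoned; if this file
	 * were instrumented, every such read would be reported as a false
	 * positive.
	 */
	static int redzone_is_intact(u8 *redzone, size_t len)
	{
		size_t i;

		for (i = 0; i < len; i++)
			if (redzone[i] != SLUB_RED_ACTIVE)
				return 0;
		return 1;
	}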

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/Makefile b/mm/Makefile
index 6a9c3f8..59cc184 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -3,6 +3,8 @@
 #
 
 KASAN_SANITIZE_page_alloc.o := n
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
 
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

This patch moves virt_to_cache() into mm/slab.h so it is shared between slab
and slub, and cache_from_obj() now uses it.
Later virt_to_cache() will also be used by the kernel address sanitizer.
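
As an illustration of the intended later use (assumed, not part of this
patch): given a pointer into a slab object, kasan code can look up the cache
and hence the object size, roughly like

	static size_t object_size_of(const void *obj)
	{
		struct kmem_cache *cache = virt_to_cache(obj);

		return cache->object_size;
	}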

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab.c |  6 ------
 mm/slab.h | 10 +++++++---
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index e7763db..fa4f840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -433,12 +433,6 @@ static inline void set_obj_status(struct page *page, int idx, int val) {}
 static int slab_max_order = SLAB_MAX_ORDER_LO;
 static bool slab_max_order_set __initdata;
 
-static inline struct kmem_cache *virt_to_cache(const void *obj)
-{
-	struct page *page = virt_to_head_page(obj);
-	return page->slab_cache;
-}
-
 static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 				 unsigned int idx)
 {
diff --git a/mm/slab.h b/mm/slab.h
index 84c160a..1257ade 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,10 +260,15 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
 }
 #endif
 
+static inline struct kmem_cache *virt_to_cache(const void *obj)
+{
+	struct page *page = virt_to_head_page(obj);
+	return page->slab_cache;
+}
+
 static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 {
 	struct kmem_cache *cachep;
-	struct page *page;
 
 	/*
 	 * When kmemcg is not being used, both assignments should return the
@@ -275,8 +280,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
 		return s;
 
-	page = virt_to_head_page(x);
-	cachep = page->slab_cache;
+	cachep = virt_to_cache(x);
 	if (slab_equal_or_root(cachep, s))
 		return cachep;
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Remove static from slab_err() and object_err() and add their declarations to
mm/slab.h so they can be used by the kernel address sanitizer.
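
As a hypothetical example of why the declarations are needed (the caller
below is only a sketch, not part of this series): kasan's reporting code
could describe a bad slab object with

	static void report_bad_object(struct kmem_cache *s, struct page *page,
				      void *object)
	{
		object_err(s, page, (u8 *)object, "kasan: bad access detected");
	}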

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab.h | 5 +++++
 mm/slub.c | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 1257ade..912af7f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
 
 void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
+void slab_err(struct kmem_cache *s, struct page *page,
+		const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 
 #endif /* MM_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index 6641a8f..3bdd9ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

To avoid false positive reports in the kernel address sanitizer, the
krealloc/kzfree functions shouldn't be instrumented. Since we want to
instrument other functions in mm/util.c, krealloc/kzfree are moved to
slab_common.c, which is not instrumented.

Unfortunately we can't completely disable instrumentation for a single
function. We could disable the compiler's instrumentation for one function
with __attribute__((no_sanitize_address)), but the problem is that a memset
call would still be replaced by the instrumented version, kasan_memset,
since memset is currently implemented as a define:
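
(An illustrative sketch of such a define follows; its form is assumed here
rather than quoted from the series.)

	#define memset(dst, c, len) kasan_memset((dst), (c), (len))

With such a define in effect, even a function built with
__attribute__((no_sanitize_address)) still ends up calling the checking
wrapper.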

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab_common.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/util.c        | 91 --------------------------------------------------------
 2 files changed, 91 insertions(+), 91 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index d31c4ba..8df59b09 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -787,3 +787,94 @@ static int __init slab_proc_init(void)
 }
 module_init(slab_proc_init);
 #endif /* CONFIG_SLABINFO */
+
+static __always_inline void *__do_krealloc(const void *p, size_t new_size,
+					   gfp_t flags)
+{
+	void *ret;
+	size_t ks = 0;
+
+	if (p)
+		ks = ksize(p);
+
+	if (ks >= new_size)
+		return (void *)p;
+
+	ret = kmalloc_track_caller(new_size, flags);
+	if (ret && p)
+		memcpy(ret, p, ks);
+
+	return ret;
+}
+
+/**
+ * __krealloc - like krealloc() but don't free @p.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * This function is like krealloc() except it never frees the originally
+ * allocated buffer. Use this if you don't want to free the buffer immediately
+ * like, for example, with RCU.
+ */
+void *__krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+	if (unlikely(!new_size))
+		return ZERO_SIZE_PTR;
+
+	return __do_krealloc(p, new_size, flags);
+
+}
+EXPORT_SYMBOL(__krealloc);
+
+/**
+ * krealloc - reallocate memory. The contents will remain unchanged.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * The contents of the object pointed to are preserved up to the
+ * lesser of the new and old sizes.  If @p is %NULL, krealloc()
+ * behaves exactly like kmalloc().  If @new_size is 0 and @p is not a
+ * %NULL pointer, the object pointed to is freed.
+ */
+void *krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+	void *ret;
+
+	if (unlikely(!new_size)) {
+		kfree(p);
+		return ZERO_SIZE_PTR;
+	}
+
+	ret = __do_krealloc(p, new_size, flags);
+	if (ret && p != ret)
+		kfree(p);
+
+	return ret;
+}
+EXPORT_SYMBOL(krealloc);
+
+/**
+ * kzfree - like kfree but zero memory
+ * @p: object to free memory of
+ *
+ * The memory of the object @p points to is zeroed before freed.
+ * If @p is %NULL, kzfree() does nothing.
+ *
+ * Note: this function zeroes the whole allocated buffer which can be a good
+ * deal bigger than the requested buffer size passed to kmalloc(). So be
+ * careful when using this function in performance sensitive code.
+ */
+void kzfree(const void *p)
+{
+	size_t ks;
+	void *mem = (void *)p;
+
+	if (unlikely(ZERO_OR_NULL_PTR(mem)))
+		return;
+	ks = ksize(mem);
+	memset(mem, 0, ks);
+	kfree(mem);
+}
+EXPORT_SYMBOL(kzfree);
diff --git a/mm/util.c b/mm/util.c
index 8f326ed..2992e16 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -142,97 +142,6 @@ void *memdup_user(const void __user *src, size_t len)
 }
 EXPORT_SYMBOL(memdup_user);
 
-static __always_inline void *__do_krealloc(const void *p, size_t new_size,
-					   gfp_t flags)
-{
-	void *ret;
-	size_t ks = 0;
-
-	if (p)
-		ks = ksize(p);
-
-	if (ks >= new_size)
-		return (void *)p;
-
-	ret = kmalloc_track_caller(new_size, flags);
-	if (ret && p)
-		memcpy(ret, p, ks);
-
-	return ret;
-}
-
-/**
- * __krealloc - like krealloc() but don't free @p.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * This function is like krealloc() except it never frees the originally
- * allocated buffer. Use this if you don't want to free the buffer immediately
- * like, for example, with RCU.
- */
-void *__krealloc(const void *p, size_t new_size, gfp_t flags)
-{
-	if (unlikely(!new_size))
-		return ZERO_SIZE_PTR;
-
-	return __do_krealloc(p, new_size, flags);
-
-}
-EXPORT_SYMBOL(__krealloc);
-
-/**
- * krealloc - reallocate memory. The contents will remain unchanged.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * The contents of the object pointed to are preserved up to the
- * lesser of the new and old sizes.  If @p is %NULL, krealloc()
- * behaves exactly like kmalloc().  If @new_size is 0 and @p is not a
- * %NULL pointer, the object pointed to is freed.
- */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
-{
-	void *ret;
-
-	if (unlikely(!new_size)) {
-		kfree(p);
-		return ZERO_SIZE_PTR;
-	}
-
-	ret = __do_krealloc(p, new_size, flags);
-	if (ret && p != ret)
-		kfree(p);
-
-	return ret;
-}
-EXPORT_SYMBOL(krealloc);
-
-/**
- * kzfree - like kfree but zero memory
- * @p: object to free memory of
- *
- * The memory of the object @p points to is zeroed before freed.
- * If @p is %NULL, kzfree() does nothing.
- *
- * Note: this function zeroes the whole allocated buffer which can be a good
- * deal bigger than the requested buffer size passed to kmalloc(). So be
- * careful when using this function in performance sensitive code.
- */
-void kzfree(const void *p)
-{
-	size_t ks;
-	void *mem = (void *)p;
-
-	if (unlikely(ZERO_OR_NULL_PTR(mem)))
-		return;
-	ks = ksize(mem);
-	memset(mem, 0, ks);
-	kfree(mem);
-}
-EXPORT_SYMBOL(kzfree);
-
 /*
  * strndup_user - duplicate an existing string from user space
  * @s: The string to duplicate
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

When a caller creates a new kmem_cache, the requested allocation size is
stored in alloc_size. Later alloc_size will be used by the kernel address
sanitizer to mark the first alloc_size bytes of a slab object as accessible
and the rest of the object as redzone.
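
A sketch of the intended later use (assumed; unpoison_shadow()/poison_shadow()
come from earlier patches in this series, while the redzone marker name used
below is hypothetical):

	static void kasan_mark_slab_object(struct kmem_cache *s, void *object)
	{
		/* the first alloc_size bytes stay accessible ... */
		unpoison_shadow(object, s->alloc_size);
		/* ... and the tail of the object becomes redzone */
		poison_shadow(object + s->alloc_size, s->size - s->alloc_size,
			      KASAN_KMALLOC_REDZONE);
	}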

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h |  5 +++++
 mm/slab.h                | 10 ++++++++++
 mm/slab_common.c         |  2 ++
 mm/slub.c                |  1 +
 4 files changed, 18 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..b8b8154 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -68,6 +68,11 @@ struct kmem_cache {
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
 	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+
+#ifdef CONFIG_KASAN
+	int alloc_size;		/* actual allocation size kmem_cache_create */
+#endif
+
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slab.h b/mm/slab.h
index 912af7f..cb2e776 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,6 +260,16 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
 }
 #endif
 
+#ifdef CONFIG_KASAN
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size)
+{
+	s->alloc_size = size;
+}
+#else
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size) { }
+#endif
+
+
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
 	struct page *page = virt_to_head_page(obj);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8df59b09..f5b52f0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -147,6 +147,7 @@ do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
 	s->name = name;
 	s->object_size = object_size;
 	s->size = size;
+	kasan_set_alloc_size(s, object_size);
 	s->align = align;
 	s->ctor = ctor;
 
@@ -409,6 +410,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 
 	s->name = name;
 	s->size = s->object_size = size;
+	kasan_set_alloc_size(s, size);
 	s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
 	err = __kmem_cache_create(s, flags);
 
diff --git a/mm/slub.c b/mm/slub.c
index 3bdd9ac..6ddedf9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3724,6 +3724,7 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
 		 * the complete object on kzalloc.
 		 */
 		s->object_size = max(s->object_size, (int)size);
+		kasan_set_alloc_size(s, max(s->alloc_size, (int)size));
 		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
 
 		for_each_memcg_cache_index(i) {
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching inaccessible memory
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Some code in slub can validly touch memory marked by kasan as inaccessible.
Even though slub.c itself is not instrumented, functions called from it are,
so to avoid false positive reports such places are protected by
kasan_disable_local()/kasan_enable_local() calls.
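
A minimal sketch of what these calls are assumed to do (the per-task field
name and the implementation below are assumptions, not taken from this
patch): a nesting counter that the shadow checks consult, so reports are
suppressed while the protected code touches poisoned bytes.

	void kasan_disable_local(void)
	{
		current->kasan_depth++;
	}

	void kasan_enable_local(void)
	{
		current->kasan_depth--;
	}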

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6ddedf9..c8dbea7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
+	kasan_disable_local();
 	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
 	print_track("Freed", get_track(s, object, TRACK_FREE));
+	kasan_enable_local();
 }
 
 static void print_page_info(struct page *page)
@@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	unsigned int off;	/* Offset of last byte */
 	u8 *addr = page_address(page);
 
+	kasan_disable_local();
+
 	print_tracking(s, p);
 
 	print_page_info(page);
@@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 		/* Beginning of the filler is the free pointer */
 		print_section("Padding ", p + off, s->size - off);
 
+	kasan_enable_local();
+
 	dump_stack();
 }
 
@@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
 {
+
+	kasan_disable_local();
 	if (!check_slab(s, page))
 		goto bad;
 
@@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
 		set_track(s, object, TRACK_ALLOC, addr);
 	trace(s, page, object, 1);
 	init_object(s, object, SLUB_RED_ACTIVE);
+	kasan_enable_local();
 	return 1;
 
 bad:
@@ -1041,6 +1050,7 @@ bad:
 		page->inuse = page->objects;
 		page->freelist = NULL;
 	}
+	kasan_enable_local();
 	return 0;
 }
 
@@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
 
 	spin_lock_irqsave(&n->list_lock, *flags);
 	slab_lock(page);
+	kasan_disable_local();
 
 	if (!check_slab(s, page))
 		goto fail;
@@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
 	trace(s, page, object, 0);
 	init_object(s, object, SLUB_RED_INACTIVE);
 out:
+	kasan_enable_local();
 	slab_unlock(page);
 	/*
 	 * Keep node_lock to preserve integrity
@@ -1096,6 +1108,7 @@ out:
 	return n;
 
 fail:
+	kasan_enable_local();
 	slab_unlock(page);
 	spin_unlock_irqrestore(&n->list_lock, *flags);
 	slab_fix(s, "Object at 0x%p not freed", object);
@@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_disable_local();
 		s->ctor(object);
+		kasan_enable_local();
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 
 	if (kmem_cache_debug(s)) {
 		void *p;
-
+		kasan_disable_local();
 		slab_pad_check(s, page);
 		for_each_object(p, s, page_address(page),
 						page->objects)
 			check_object(s, page, p, SLUB_RED_INACTIVE);
+		kasan_enable_local();
 	}
 
 	kmemcheck_free_shadow(page, compound_order(page));
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

With this patch kasan will be able to catch bugs in memory allocated
by slub.
When a slab page is allocated, the whole page is marked as inaccessible
in the corresponding shadow memory.
On allocation of a slub object, the requested allocation size is marked
as accessible, and the rest of the object (including slub's metadata) is
marked as a redzone (inaccessible).
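
The intended effect on a caller can be sketched like this (a made-up example
of the bug classes this catches, not code from the patch):

     char *p = kmalloc(100, GFP_KERNEL);	/* served from the kmalloc-128 cache */

     p[104] = 'x';	/* falls into KASAN_KMALLOC_REDZONE -> "buffer overflow" report */
     kfree(p);
     p[0] = 'y';	/* object is now KASAN_KMALLOC_FREE -> "use after free" report */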

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the actually allocated area. Such callers may validly access
the whole allocated memory, so it should be marked as accessible by a
kasan_krealloc() call.
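
A hypothetical caller pattern that motivates this (sizes are only examples):

     char *buf = kmalloc(60, GFP_KERNEL);
     size_t avail = ksize(buf);	/* e.g. 64, the usable size of the kmalloc-64 object */

     memset(buf, 0, avail);	/* touches slack bytes 60..63 -- valid only because
				   ksize() unpoisoned them via kasan_krealloc() */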

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  22 ++++++++++
 include/linux/slab.h  |  19 +++++++--
 lib/Kconfig.kasan     |   2 +
 mm/kasan/kasan.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |   5 +++
 mm/kasan/report.c     |  23 +++++++++++
 mm/slab.h             |   2 +-
 mm/slab_common.c      |   9 +++--
 mm/slub.c             |  24 ++++++++++-
 9 files changed, 208 insertions(+), 8 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4adc0a1..583c011 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -20,6 +20,17 @@ void kasan_init_shadow(void);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_alloc_slab_pages(struct page *page, int order);
+void kasan_free_slab_pages(struct page *page, int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 68b1feab..a9513e9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline void *kmalloc(size_t size, gfp_t flags)
 {
+	void *ret;
+
 	if (__builtin_constant_p(size)) {
 		if (size > KMALLOC_MAX_CACHE_SIZE)
 			return kmalloc_large(size, flags);
@@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 			if (!index)
 				return ZERO_SIZE_PTR;
 
-			return kmem_cache_alloc_trace(kmalloc_caches[index],
+			ret = kmem_cache_alloc_trace(kmalloc_caches[index],
 					flags, size);
+
+			kasan_kmalloc(kmalloc_caches[index], ret, size);
+
+			return ret;
 		}
 #endif
 	}
@@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
 static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
 #ifndef CONFIG_SLOB
+	void *ret;
+
 	if (__builtin_constant_p(size) &&
 		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
 		int i = kmalloc_index(size);
@@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 		if (!i)
 			return ZERO_SIZE_PTR;
 
-		return kmem_cache_alloc_node_trace(kmalloc_caches[i],
-						flags, node, size);
+		ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
+						  flags, node, size);
+
+		kasan_kmalloc(kmalloc_caches[i], ret, size);
+
+		return ret;
 	}
 #endif
 	return __kmalloc_node(size, flags, node);
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 2bfff78..289a624 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: dynamic memory error detector"
+	depends on SLUB
+	select STACKTRACE
 	default n
 	help
 	  Enables AddressSanitizer - dynamic memory error detector,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 109478e..9b5182a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
 	}
 }
 
+void kasan_alloc_slab_pages(struct page *page, int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
+}
+
+void kasan_free_slab_pages(struct page *page, int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (unlikely(object == NULL))
+		return;
+
+	poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
+	unpoison_shadow(object, cache->alloc_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	if (unlikely(!kasan_initialized))
+		return;
+
+	poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	unpoison_shadow(object, size);
+	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	unpoison_shadow(ptr, size);
+	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc_large);
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page;
+
+	if (unlikely(!kasan_initialized))
+		return;
+
+	page = virt_to_page(ptr);
+	poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (unlikely(!kasan_initialized))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index be9597e..f925d03 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,11 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 6ef9e57..6d829af 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "buffer overflow";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_page(info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_REDZONE:
+		cache = virt_to_cache((void *)info->access_addr);
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			cache = virt_to_cache((void *)info->access_addr);
+			slab_start = page_address(virt_to_head_page((void *)info->access_addr));
+			object = virt_to_obj(cache, slab_start,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
 		dump_page(page, "kasan error");
 		dump_stack();
 		break;
diff --git a/mm/slab.h b/mm/slab.h
index cb2e776..b22ed8b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -353,6 +353,6 @@ void slab_err(struct kmem_cache *s, struct page *page,
 		const char *fmt, ...);
 void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason);
-
+size_t __ksize(const void *obj);
 
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f5b52f0..313e270 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -625,6 +625,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -797,10 +798,12 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	size_t ks = 0;
 
 	if (p)
-		ks = ksize(p);
+		ks = __ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
@@ -875,7 +878,7 @@ void kzfree(const void *p)
 
 	if (unlikely(ZERO_OR_NULL_PTR(mem)))
 		return;
-	ks = ksize(mem);
+	ks = __ksize(mem);
 	memset(mem, 0, ks);
 	kfree(mem);
 }
diff --git a/mm/slub.c b/mm/slub.c
index c8dbea7..87d2198 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -1245,11 +1246,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1267,11 +1270,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1371,6 +1376,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (!page)
 		return NULL;
 
+	kasan_alloc_slab_pages(page, oo_order(oo));
+
 	page->objects = oo_objects(oo);
 	mod_zone_page_state(page_zone(page),
 		(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -1450,6 +1457,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	}
 
 	kmemcheck_free_shadow(page, compound_order(page));
+	kasan_free_slab_pages(page, compound_order(page));
 
 	mod_zone_page_state(page_zone(page),
 		(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2907,6 +2915,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3289,6 +3298,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3332,12 +3343,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3353,6 +3366,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

To avoid build errors, the compiler instrumentation used for the kernel
address sanitizer must be disabled for code that is not linked with the
kernel.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/boot/compressed/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 76a50ec..03f2976 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinuz image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 OBJS		=
 
 # Ensure that MMCIF loader code appears early in the image
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

To avoid build errors, the compiler instrumentation used for the kernel
address sanitizer must be disabled for code that is not linked with the
kernel.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/boot/compressed/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 76a50ec..03f2976 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinuz image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 OBJS		=
 
 # Ensure that MMCIF loader code appears early in the image
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

To avoid build errors, the compiler instrumentation used for the kernel
address sanitizer must be disabled for code that is not linked with the
kernel.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/boot/compressed/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 76a50ec..03f2976 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinuz image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 OBJS		=
 
 # Ensure that MMCIF loader code appears early in the image
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.

This patch replaces these functions with our own instrumented
functions (kasan_mem*) when CONFIG_KASAN=y.

In rare circumstances you may need to use the original functions;
in such a case, put #undef KASAN_HOOKS before the includes.
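
A minimal sketch of the idea behind these wrappers (the real kasan_mem*
implementations are added by the core kasan patches of this series and are
not shown here; kasan_check_range() below is a hypothetical stand-in for the
shadow check they perform):

	/* Built in a file that does not define KASAN_HOOKS, so memcpy()
	 * below resolves to the original assembly implementation. */
	void *kasan_memcpy(void *dst, const void *src, size_t len)
	{
		kasan_check_range(src, len);	/* hypothetical: check the whole source range once */
		kasan_check_range(dst, len);	/* hypothetical: check the whole destination range once */
		return memcpy(dst, src, len);
	}

Checking each range once up front is also why these wrappers are preferred
even for mem* functions written in C, instead of relying on per-access
instrumentation inside them.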

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/include/asm/string.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa..3cbe47f 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -38,4 +38,34 @@ extern void __memzero(void *ptr, __kernel_size_t n);
 		(__p);							\
 	})
 
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, the compiler can't instrument memory
+ * accesses inside them.
+ *
+ * To solve this issue we replace these functions with our own
+ * instrumented functions (kasan_mem*).
+ *
+ * If any of the mem*() functions are written in C, we still use our
+ * instrumented functions, for performance reasons: it should be faster
+ * to check the whole accessed memory range at once than to do a check
+ * at each memory access.
+ *
+ * In rare circumstances you may need to use the original functions;
+ * in such a case, #undef KASAN_HOOKS before the includes.
+ */
+#undef memset
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
 #endif
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.

This patch replaces these functions with our own instrumented
functions (kasan_mem*) when CONFIG_KASAN=y.

In rare circumstances you may need to use the original functions;
in such a case, put #undef KASAN_HOOKS before the includes.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/include/asm/string.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa..3cbe47f 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -38,4 +38,34 @@ extern void __memzero(void *ptr, __kernel_size_t n);
 		(__p);							\
 	})
 
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, the compiler can't instrument memory
+ * accesses inside them.
+ *
+ * To solve this issue we replace these functions with our own
+ * instrumented functions (kasan_mem*).
+ *
+ * If any of the mem*() functions are written in C, we still use our
+ * instrumented functions, for performance reasons: it should be faster
+ * to check the whole accessed memory range at once than to do a check
+ * at each memory access.
+ *
+ * In rare circumstances you may need to use the original functions;
+ * in such a case, #undef KASAN_HOOKS before the includes.
+ */
+#undef memset
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
 #endif
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.

This patch replaces these functions with our own instrumented
functions (kasan_mem*) when CONFIG_KASAN=y.

In rare circumstances you may need to use the original functions;
in such a case, put #undef KASAN_HOOKS before the includes.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/include/asm/string.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa..3cbe47f 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -38,4 +38,34 @@ extern void __memzero(void *ptr, __kernel_size_t n);
 		(__p);							\
 	})
 
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, the compiler can't instrument memory
+ * accesses inside them.
+ *
+ * To solve this issue we replace these functions with our own
+ * instrumented functions (kasan_mem*).
+ *
+ * If any of the mem*() functions are written in C, we still use our
+ * instrumented functions, for performance reasons: it should be faster
+ * to check the whole accessed memory range at once than to do a check
+ * at each memory access.
+ *
+ * In rare circumstances you may need to use the original functions;
+ * in such a case, #undef KASAN_HOOKS before the includes.
+ */
+#undef memset
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
 #endif
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)
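
This patch only wires the call into arm_memblock_init(); as a rough sketch
(assumptions: the kasan_alloc_shadow() helper added elsewhere in the series,
the usual 1:8 shadow ratio, and lowmem spanning PHYS_OFFSET..arm_lowmem_limit),
the reservation it has to perform at this point amounts to something like:

	/* illustration only, not this patch's code */
	phys_addr_t lowmem_size = arm_lowmem_limit - PHYS_OFFSET;
	phys_addr_t shadow_phys = memblock_alloc(lowmem_size >> 3, PAGE_SIZE);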

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
+#include <linux/kasan.h>
 
 #include <asm/cp15.h>
 #include <asm/mach-types.h>
@@ -324,6 +325,8 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
 	 */
 	dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
 
+	kasan_alloc_shadow();
+
 	arm_memblock_steal_permitted = false;
 	memblock_dump_all();
 }
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
+#include <linux/kasan.h>
 
 #include <asm/cp15.h>
 #include <asm/mach-types.h>
@@ -324,6 +325,8 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
 	 */
 	dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
 
+	kasan_alloc_shadow();
+
 	arm_memblock_steal_permitted = false;
 	memblock_dump_all();
 }
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
+#include <linux/kasan.h>
 
 #include <asm/cp15.h>
 #include <asm/mach-types.h>
@@ -324,6 +325,8 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
 	 */
 	dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
 
+	kasan_alloc_shadow();
+
 	arm_memblock_steal_permitted = false;
 	memblock_dump_all();
 }
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Now everything in arm code is ready for kasan. Enable it.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -26,6 +26,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+	select HAVE_ARCH_KASAN
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_TRACEHOOK
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

Now everything in arm code is ready for kasan. Enable it.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -26,6 +26,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+	select HAVE_ARCH_KASAN
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_TRACEHOOK
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

Now everything in arm code is ready for kasan. Enable it.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -26,6 +26,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+	select HAVE_ARCH_KASAN
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_TRACEHOOK
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in __d_lookup_rcu, which may validly read a
little beyond the allocated size.
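
For context, a simplified sketch of the kind of word-at-a-time name compare
that causes the over-read (the real code is dentry_string_cmp() under
CONFIG_DCACHE_WORD_ACCESS; the helper below is a little-endian-only
illustration, not the kernel's implementation):

	static int name_cmp_sketch(const unsigned char *cs,
				   const unsigned char *ct, unsigned int tcount)
	{
		unsigned long a, b, mask;

		while (tcount >= sizeof(unsigned long)) {
			if (*(const unsigned long *)cs != *(const unsigned long *)ct)
				return 1;
			cs += sizeof(unsigned long);
			ct += sizeof(unsigned long);
			tcount -= sizeof(unsigned long);
		}
		if (!tcount)
			return 0;
		/*
		 * The tail is still loaded as one full word and masked, so
		 * this read may cover a few bytes past name->len.  Those
		 * bytes lie inside the roundup(name->len + 1,
		 * sizeof(unsigned long)) area unpoisoned by this patch.
		 */
		a = *(const unsigned long *)cs;
		b = *(const unsigned long *)ct;
		mask = ~0UL >> (8 * (sizeof(unsigned long) - tcount));
		return (a ^ b) & mask ? 1 : 0;
	}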

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index b7e8b20..dff64f2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+		unpoison_shadow(dname,
+				roundup(name->len + 1, sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in __d_lookup_rcu, which may validly read a
little beyond the allocated size.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index b7e8b20..dff64f2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+		unpoison_shadow(dname,
+				roundup(name->len + 1, sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in __d_lookup_rcu, which may validly read a
little beyond the allocated size.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index b7e8b20..dff64f2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+		unpoison_shadow(dname,
+				roundup(name->len + 1, sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 11:30   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.debug       |   8 ++
 lib/Makefile            |   1 +
 lib/test_kmalloc_bugs.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kmalloc_bugs.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 67a4dfc..64fd9e6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -609,6 +609,14 @@ config DEBUG_STACKOVERFLOW
 
 	  If in doubt, say "N".
 
+config KMALLOC_BUG_TEST
+	tristate "Module for testing bug detection in sl[auo]b"
+	default n
+	help
+	  This is a test module doing various nasty things like
+	  out-of-bounds accesses and use-after-free. It is useful for testing
+	  kernel debugging features like the kernel address sanitizer.
+
 source "lib/Kconfig.kmemcheck"
 
 source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index e48067c..af68259 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
+obj-$(CONFIG_KMALLOC_BUG_TEST) += test_kmalloc_bugs.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kmalloc_bugs.c b/lib/test_kmalloc_bugs.c
new file mode 100644
index 0000000..04cd11b
--- /dev/null
+++ b/lib/test_kmalloc_bugs.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kmalloc bug test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = PAGE_SIZE*3 - 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*ptr = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[0] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return 0;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm, Andrey Ryabinin

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.debug       |   8 ++
 lib/Makefile            |   1 +
 lib/test_kmalloc_bugs.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kmalloc_bugs.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 67a4dfc..64fd9e6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -609,6 +609,14 @@ config DEBUG_STACKOVERFLOW
 
 	  If in doubt, say "N".
 
+config KMALLOC_BUG_TEST
+	tristate "Module for testing bug detection in sl[auo]b"
+	default n
+	help
+	  This is a test module doing various nasty things like
+	  out-of-bounds accesses and use-after-free. It is useful for testing
+	  kernel debugging features like the kernel address sanitizer.
+
 source "lib/Kconfig.kmemcheck"
 
 source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index e48067c..af68259 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
+obj-$(CONFIG_KMALLOC_BUG_TEST) += test_kmalloc_bugs.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kmalloc_bugs.c b/lib/test_kmalloc_bugs.c
new file mode 100644
index 0000000..04cd11b
--- /dev/null
+++ b/lib/test_kmalloc_bugs.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kmalloc bug test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = PAGE_SIZE*3 - 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*ptr = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[0] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return 0;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module
@ 2014-07-09 11:30   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-09 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.debug       |   8 ++
 lib/Makefile            |   1 +
 lib/test_kmalloc_bugs.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kmalloc_bugs.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 67a4dfc..64fd9e6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -609,6 +609,14 @@ config DEBUG_STACKOVERFLOW
 
 	  If in doubt, say "N".
 
+config KMALLOC_BUG_TEST
+	tristate "Module for testing bug detection in sl[auo]b"
+	default n
+	help
+	  This is a test module doing various nasty things like
+	  out-of-bounds accesses and use-after-free. It is useful for testing
+	  kernel debugging features like the kernel address sanitizer.
+
 source "lib/Kconfig.kmemcheck"
 
 source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index e48067c..af68259 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
+obj-$(CONFIG_KMALLOC_BUG_TEST) += test_kmalloc_bugs.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kmalloc_bugs.c b/lib/test_kmalloc_bugs.c
new file mode 100644
index 0000000..04cd11b
--- /dev/null
+++ b/lib/test_kmalloc_bugs.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kmalloc bug test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = PAGE_SIZE*3 - 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*ptr = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[0] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return 0;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
1.8.5.5

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 14:26     ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:26 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> +
> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */

We call these zones "PADDING". Redzones are associated with an object.
Padding is there because bytes are left over, unusable or necessary for
alignment.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-09 14:26     ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:26 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> +
> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */

We call these zones "PADDING". Redzones are associated with an object.
Padding is there because bytes are left over, unusable or necessary for
alignment.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-09 14:26     ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> +
> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */

We call these zones "PADDING". Redzones are associated with an object.
Padding is there because bytes are left over, unusable or necessary for
alignment.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-09 14:29     ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:29 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.

Hmmm... This is allocator specific. At some future point it would be good
to move error reporting to slab_common.c and use those from all
allocators.

> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slab.h | 5 +++++
>  mm/slub.c | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 1257ade..912af7f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>
>  void *slab_next(struct seq_file *m, void *p, loff_t *pos);
>  void slab_stop(struct seq_file *m, void *p);
> +void slab_err(struct kmem_cache *s, struct page *page,
> +		const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +		u8 *object, char *reason);
> +
>
>  #endif /* MM_SLAB_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 6641a8f..3bdd9ac 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	dump_stack();
>  }
>
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
>  			u8 *object, char *reason)
>  {
>  	slab_bug(s, "%s", reason);
>  	print_trailer(s, page, object);
>  }
>
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
>  			const char *fmt, ...)
>  {
>  	va_list args;
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
@ 2014-07-09 14:29     ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:29 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.

Hmmm... This is allocator specific. At some future point it would be good
to move error reporting to slab_common.c and use those from all
allocators.

> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slab.h | 5 +++++
>  mm/slub.c | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 1257ade..912af7f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>
>  void *slab_next(struct seq_file *m, void *p, loff_t *pos);
>  void slab_stop(struct seq_file *m, void *p);
> +void slab_err(struct kmem_cache *s, struct page *page,
> +		const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +		u8 *object, char *reason);
> +
>
>  #endif /* MM_SLAB_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 6641a8f..3bdd9ac 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	dump_stack();
>  }
>
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
>  			u8 *object, char *reason)
>  {
>  	slab_bug(s, "%s", reason);
>  	print_trailer(s, page, object);
>  }
>
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
>  			const char *fmt, ...)
>  {
>  	va_list args;
>


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
@ 2014-07-09 14:29     ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.

Hmmm... This is allocator specific. At some future point it would be good
to move error reporting to slab_common.c and use those from all
allocators.

> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slab.h | 5 +++++
>  mm/slub.c | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 1257ade..912af7f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>
>  void *slab_next(struct seq_file *m, void *p, loff_t *pos);
>  void slab_stop(struct seq_file *m, void *p);
> +void slab_err(struct kmem_cache *s, struct page *page,
> +		const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +		u8 *object, char *reason);
> +
>
>  #endif /* MM_SLAB_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 6641a8f..3bdd9ac 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	dump_stack();
>  }
>
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
>  			u8 *object, char *reason)
>  {
>  	slab_bug(s, "%s", reason);
>  	print_trailer(s, page, object);
>  }
>
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
>  			const char *fmt, ...)
>  {
>  	va_list args;
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-09 14:32     ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:32 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> To avoid false positive reports in kernel address sanitizer krealloc/kzfree
> functions shouldn't be instrumented. Since we want to instrument other
> functions in mm/util.c, krealloc/kzfree moved to slab_common.c which is not
> instrumented.
>
> Unfortunately we can't completely disable instrumentation for one function.
> We could disable compiler's instrumentation for one function by using
> __atribute__((no_sanitize_address)).
> But the problem here is that memset call will be replaced by instumented
> version kasan_memset since currently it's implemented as define:

Looks good to me and useful regardless of the sanitizer going in.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-09 14:33     ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:33 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> When a caller creates a new kmem_cache, the requested size will be
> stored in alloc_size. Later, alloc_size will be used by the kernel
> address sanitizer to mark the first alloc_size bytes of a slab object
> as accessible and the rest of its size as redzone.

I think this patch is not needed since object_size == alloc_size right?
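
For reference, a rough sketch of what the commit message describes (the
helper names and poison values are assumptions, not taken from the patch);
Christoph's point is that the existing object_size field may already carry
the value the new alloc_size field would store:

     /* on allocation: unpoison what the caller asked for, redzone the rest */
     static void kasan_mark_allocated(struct kmem_cache *s, void *object)
     {
	     unpoison_shadow(object, s->alloc_size);
	     poison_shadow((char *)object + s->alloc_size,
			   s->size - s->alloc_size, KASAN_REDZONE);
     }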

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-09 14:48     ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-09 14:48 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> When a slab page is allocated, the whole page is marked as inaccessible
> in the corresponding shadow memory.
> On allocation of a slub object, the requested allocation size is marked as
> accessible, and the rest of the object (including slub's metadata) is
> marked as redzone (inaccessible).
>
> We also mark an object as accessible if ksize was called for it.
> There are some places in the kernel where the ksize function is called to
> inquire the size of the really allocated area. Such callers may validly
> access the whole allocated memory, so it should be marked as accessible by
> the kasan_krealloc call.

Do you really need to go through all of this? Add the hooks to
kmem_cache_alloc_trace() instead and use the existing instrumentation
that is there for other purposes?
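
What Christoph is suggesting, roughly (a sketch only; kasan_kmalloc() is an
assumed hook name, and the surrounding function is reproduced approximately
from mm/slub.c of that era, not from the patch):

     void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags,
				  size_t size)
     {
	     void *ret = slab_alloc(s, gfpflags, _RET_IP_);

	     trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
	     /* unpoison the requested size, redzone the tail */
	     kasan_kmalloc(s, ret, size);
	     return ret;
     }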

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 19:29     ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-07-09 19:29 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

Andrey Ryabinin <a.ryabinin@samsung.com> writes:

Seems like a useful facility. Thanks for working on it. Overall the code
looks fairly good. Some comments below.


> +
> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
> +fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> +
> +KASAN is better than all of CONFIG_DEBUG_PAGEALLOC, because it:
> + - is based on compiler instrumentation (fast),
> + - detects OOB for both writes and reads,
> + - provides UAF detection,

Please expand the acronym.

> +
> +|--------|        |--------|
> +| Memory |----    | Memory |
> +|--------|    \   |--------|
> +| Shadow |--   -->| Shadow |
> +|--------|  \     |--------|
> +|   Bad  |   ---->|  Bad   |
> +|--------|  /     |--------|
> +| Shadow |--   -->| Shadow |
> +|--------|    /   |--------|
> +| Memory |----    | Memory |
> +|--------|        |--------|

I guess this implies it's incompatible with memory hotplug, as the 
shadow couldn't be extended?

That's fine, but you should exclude that in Kconfig.

There are likely more exclude dependencies for Kconfig too.
Needs dependencies on the right sparsemem options?
Does it work with kmemcheck? If not, exclude it.

Perhaps try to boot it with all other debug options and see which ones break.

> diff --git a/Makefile b/Makefile
> index 64ab7b3..08a07f2 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
>  CFLAGS_KERNEL	=
>  AFLAGS_KERNEL	=
>  CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
> +CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
> +			--param asan-use-after-return=0 \
> +			--param asan-globals=0 \
> +			--param asan-memintrin=0 \
> +			--param asan-instrumentation-with-call-threshold=0 \

Hardcoding --param is not very nice. The parameters can change from one
compiler version to another. Need some version checking?

Also you should probably have some check that the compiler supports it
(and print some warning if not). Otherwise randconfig builds will be
broken if the compiler doesn't.

Also, does the kernel really build/work without the other patches?
If not, please move this patch to the end of the series, to keep
the series bisectable (this may need moving parts of the includes
into a separate patch).

> diff --git a/commit b/commit
> new file mode 100644
> index 0000000..134f4dd
> --- /dev/null
> +++ b/commit
> @@ -0,0 +1,3 @@
> +
> +I'm working on address sanitizer for kernel.
> +fuck this bloody.
> \ No newline at end of file

Heh. Please remove.

> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> new file mode 100644
> index 0000000..2bfff78
> --- /dev/null
> +++ b/lib/Kconfig.kasan
> @@ -0,0 +1,20 @@
> +config HAVE_ARCH_KASAN
> +	bool
> +
> +if HAVE_ARCH_KASAN
> +
> +config KASAN
> +	bool "AddressSanitizer: dynamic memory error detector"
> +	default n
> +	help
> +	  Enables AddressSanitizer - dynamic memory error detector,
> +	  that finds out-of-bounds and use-after-free bugs.

Needs much more description.

> +
> +config KASAN_SANITIZE_ALL
> +	bool "Instrument entire kernel"
> +	depends on KASAN
> +	default y
> +	help
> +	  This enables compiler intrumentation for entire kernel
> +

Same.


> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..e2cd345
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,292 @@
> +/*
> + *

Add one line here what the file does. Same for other files.

> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> +#include "kasan.h"
> +#include "../slab.h"

That's ugly, but ok.

> +
> +static bool __read_mostly kasan_initialized;

It would be better to use a static_key, but I guess your initialization
is too early?

Of course the proposal to move it into start_kernel and get rid of the
flag would be best.
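
A sketch of the static_key variant (assuming the jump-label API of that era;
whether KASAN's setup runs early enough for it is exactly the open question,
and the check_memory_region() signature is inferred from the quoted callers):

     #include <linux/jump_label.h>

     static struct static_key kasan_enabled = STATIC_KEY_INIT_FALSE;

     void check_memory_region(unsigned long addr, size_t size, bool write)
     {
	     if (!static_key_false(&kasan_enabled))
		     return;		/* near-free NOP branch until enabled */
	     /* ... shadow lookup and reporting as in the patch ... */
     }

     static void __init kasan_late_enable(void)
     {
	     /* flipped once, after the shadow memory is ready */
	     static_key_slow_inc(&kasan_enabled);
     }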

> +
> +unsigned long kasan_shadow_start;
> +unsigned long kasan_shadow_end;
> +
> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */

Do these all need to be global?

> +
> +
> +static inline bool addr_is_in_mem(unsigned long addr)
> +{
> +	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> +}

Of course there are lots of cases where this doesn't work (like large
holes), but I assume this has been checked elsewhere?


> +
> +void kasan_enable_local(void)
> +{
> +	if (likely(kasan_initialized))
> +		current->kasan_depth--;
> +}
> +
> +void kasan_disable_local(void)
> +{
> +	if (likely(kasan_initialized))
> +		current->kasan_depth++;
> +}

Couldn't this be done without checking the flag?
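
One way that could look, as a sketch: if kasan_depth starts out non-zero for
every task (so reports stay suppressed until setup is done), the nesting
counter can be maintained unconditionally:

     void kasan_enable_local(void)
     {
	     current->kasan_depth--;
     }

     void kasan_disable_local(void)
     {
	     current->kasan_depth++;
     }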


> +		return;
> +
> +	if (unlikely(addr < TASK_SIZE)) {
> +		info.access_addr = addr;
> +		info.access_size = size;
> +		info.is_write = write;
> +		info.ip = _RET_IP_;
> +		kasan_report_user_access(&info);
> +		return;
> +	}

How about vsyscall pages here?

> +
> +	if (!addr_is_in_mem(addr))
> +		return;
> +
> +	access_addr = memory_is_poisoned(addr, size);
> +	if (likely(access_addr == 0))
> +		return;
> +
> +	info.access_addr = access_addr;
> +	info.access_size = size;
> +	info.is_write = write;
> +	info.ip = _RET_IP_;
> +	kasan_report_error(&info);
> +}
> +
> +void __init kasan_alloc_shadow(void)
> +{
> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> +	unsigned long shadow_size;
> +	phys_addr_t shadow_phys_start;
> +
> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
> +
> +	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
> +	if (!shadow_phys_start) {
> +		pr_err("Unable to reserve shadow memory\n");
> +		return;

Wouldn't this crash&burn later? panic?

> +void *kasan_memcpy(void *dst, const void *src, size_t len)
> +{
> +	if (unlikely(len == 0))
> +		return dst;
> +
> +	check_memory_region((unsigned long)src, len, false);
> +	check_memory_region((unsigned long)dst, len, true);

I assume this handles negative len?
Also check for overlaps?
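
One possible shape for those checks (a sketch only; the exact policy is up
to the author, and how the final copy avoids recursing into the wrapper is
not shown in the quoted hunk):

     void *kasan_memcpy(void *dst, const void *src, size_t len)
     {
	     if (unlikely(len == 0))
		     return dst;

	     /* size_t is unsigned, so a "negative" length shows up as huge */
	     WARN_ON_ONCE(len > INT_MAX);

	     /* memcpy() is undefined for overlapping ranges */
	     WARN_ON_ONCE((const char *)dst < (const char *)src + len &&
			  (const char *)src < (const char *)dst + len);

	     check_memory_region((unsigned long)src, len, false);
	     check_memory_region((unsigned long)dst, len, true);

	     return memcpy(dst, src, len);
     }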

> +
> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
> +{
> +	return x - ((x - slab_start) % s->size);
> +}

This should be in the respective slab headers, not hard coded.

> +void kasan_report_error(struct access_info *info)
> +{
> +	kasan_disable_local();
> +	pr_err("================================="
> +		"=================================\n");
> +	print_error_description(info);
> +	print_address_description(info);
> +	print_shadow_for_address(info->access_addr);
> +	pr_err("================================="
> +		"=================================\n");
> +	kasan_enable_local();
> +}
> +
> +void kasan_report_user_access(struct access_info *info)
> +{
> +	kasan_disable_local();

Should print the same prefix oopses use; a lot of log grep tools
look for that.

Also you may want some lock to prevent multiple
reports from mixing.
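
A sketch of that serialization (the lock name is an assumption): one spinlock
held around the whole report keeps concurrent reports from different CPUs
from interleaving in the log.

     static DEFINE_SPINLOCK(kasan_report_lock);

     void kasan_report_error(struct access_info *info)
     {
	     unsigned long flags;

	     spin_lock_irqsave(&kasan_report_lock, flags);
	     kasan_disable_local();
	     pr_err("================================="
		     "=================================\n");
	     print_error_description(info);
	     print_address_description(info);
	     print_shadow_for_address(info->access_addr);
	     pr_err("================================="
		     "=================================\n");
	     kasan_enable_local();
	     spin_unlock_irqrestore(&kasan_report_lock, flags);
     }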

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 19:31     ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-07-09 19:31 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> +
> +#undef memcpy
> +void *kasan_memset(void *ptr, int val, size_t len);
> +void *kasan_memcpy(void *dst, const void *src, size_t len);
> +void *kasan_memmove(void *dst, const void *src, size_t len);
> +
> +#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
> +#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
> +#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))

I don't think just a define is enough; gcc can call these functions
implicitly too (both with and without __), for example for a struct copy.

You need to have true linker level aliases. 
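
What a "linker level" alias could look like (sketch; the alias has to live in
the translation unit that defines kasan_memcpy(), and whether the series ends
up doing it this way is not shown here):

     /* in the file that defines kasan_memcpy(): */
     void *memcpy(void *dst, const void *src, size_t len)
	     __attribute__((alias("kasan_memcpy")));

     /* The point about implicit calls: gcc may lower a struct assignment
      * to a memcpy() call of its own, which never goes through the
      * #define in the header -- only a real alias catches it: */
     struct pkt { char payload[64]; };

     void copy_pkt(struct pkt *d, const struct pkt *s)
     {
	     *d = *s;
     }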

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 19:33     ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-07-09 19:33 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

Andrey Ryabinin <a.ryabinin@samsung.com> writes:

> Instrumentation of these files may result in an unbootable machine.

This doesn't make sense. Is the code not NMI safe?
If yes, that would need to be fixed.

Please debug more.

perf is a common source of bugs (see Vince Weaver's fuzzer results),
so it would be good to have this functionality for it.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 20:26     ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-09 20:26 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
> 
> Here is function to translate address to corresponding shadow address:
> 
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>                              + kasan_shadow_start;
>      }

How does this interact with vmalloc() addresses or those from a kmap()?
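
A worked example of the arithmetic above (numbers are only illustrative),
with KASAN_SHADOW_SCALE_SHIFT == 3, i.e. one shadow byte per 8 bytes of
lowmem:

     /*
      *   PAGE_OFFSET + 0x100 ... PAGE_OFFSET + 0x107 -> shadow start + 0x20
      *   PAGE_OFFSET + 0x108 ... PAGE_OFFSET + 0x10f -> shadow start + 0x21
      *
      * vmalloc() areas and (on 32-bit) kmap() windows sit above high_memory,
      * outside the PAGE_OFFSET..high_memory range that the patch's
      * addr_is_in_mem() accepts, so as posted those accesses appear to be
      * skipped rather than translated.
      */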

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 20:37     ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-09 20:37 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +void __init kasan_alloc_shadow(void)
> +{
> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> +	unsigned long shadow_size;
> +	phys_addr_t shadow_phys_start;
> +
> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;

This calculation is essentially meaningless, and it's going to break
when we have sparse memory situations like having big holes.  This code
attempts to allocate non-sparse data for backing what might be very
sparse memory ranges.

It's quite OK for us to handle configurations today where we have 2GB of
RAM with 1GB at 0x0 and 1GB at 0x10000000000.  This code would attempt
to allocate a 128GB shadow area for this configuration with 2GB of RAM. :)

You're probably going to get stuck doing something similar to what the
sparsemem-vmemmap code does.  You could handle this for normal sparsemem
by adding a shadow area pointer to the memory section.
Or, just vmalloc() (get_vm_area() really) the virtual space and then
make sure to allocate the backing store before you need it (handling the
faults would probably get too tricky).
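
A very rough sketch of that get_vm_area() route (shape only; populating the
backing store per present memory section and the interaction with early boot
are glossed over):

     static unsigned long kasan_shadow_start;

     void __init kasan_alloc_shadow(void)
     {
	     unsigned long shadow_size =
		     ((unsigned long)high_memory - PAGE_OFFSET)
			     >> KASAN_SHADOW_SCALE_SHIFT;
	     struct vm_struct *area;

	     /* reserve only virtual space for the whole shadow range */
	     area = get_vm_area(PAGE_ALIGN(shadow_size), VM_ALLOC);
	     if (!area)
		     panic("kasan: unable to reserve shadow address space");
	     kasan_shadow_start = (unsigned long)area->addr;

	     /* backing pages would then be mapped only for shadow that
	      * covers memory which is actually present */
     }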

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-09 20:38     ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-09 20:38 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +config KASAN
> +	bool "AddressSanitizer: dynamic memory error detector"
> +	default n
> +	help
> +	  Enables AddressSanitizer - dynamic memory error detector,
> +	  that finds out-of-bounds and use-after-free bugs.

This definitely needs some more text like "This option eats boatloads of
memory and will slow your system down enough that it should never be
used in production unless you are crazy".

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 19:29     ` Andi Kleen
  (?)
@ 2014-07-09 20:40       ` Yuri Gribov
  -1 siblings, 0 replies; 862+ messages in thread
From: Yuri Gribov @ 2014-07-09 20:40 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

On Wed, Jul 9, 2014 at 11:29 PM, Andi Kleen <andi@firstfloor.org> wrote:
> Hardcoding --param is not very nice. They can change from compiler
> to compiler version. Need some version checking?

We plan to address this soon. CFLAGS will look more like
-fsanitize=kernel-address but this flag is not yet in gcc.

-Y

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 11:29 ` Andrey Ryabinin
  (?)
@ 2014-07-09 21:19   ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-09 21:19 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

This is totally self-serving (and employer-serving), but has anybody
thought about this large collection of memory debugging tools that we
are growing?  It helps to have them all in the same places in the menus
(thanks for adding it to Memory Debugging, btw!).

But, this gives us at least four things that overlap with kasan's
features on some level.  Each of these has its own advantages and
disadvantages, of course:

1. DEBUG_PAGEALLOC
2. SLUB debugging / DEBUG_OBJECTS
3. kmemcheck
4. kasan
... and there are surely more coming down the pike.  Like Intel MPX:

> https://software.intel.com/en-us/articles/introduction-to-intel-memory-protection-extensions

Or, do we just keep adding these overlapping tools and their associated
code over and over and fragment their user bases?

You're also claiming that "KASAN is better than all of
CONFIG_DEBUG_PAGEALLOC".  So should we just disallow (or hide)
DEBUG_PAGEALLOC on kernels where KASAN is available?

Maybe we just need to keep these out of mainline and make Andrew carry
it in -mm until the end of time. :)

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 21:19   ` Dave Hansen
  (?)
@ 2014-07-09 21:44     ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-07-09 21:44 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-kbuild, linux-arm-kernel, x86, linux-mm

Dave Hansen <dave.hansen@intel.com> writes:
>
> You're also claiming that "KASAN is better than all of

better as in finding more bugs, but surely not better as in
"do so with less overhead"

> CONFIG_DEBUG_PAGEALLOC".  So should we just disallow (or hide)
> DEBUG_PAGEALLOC on kernels where KASAN is available?

I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.

DEBUG_PAGEALLOC/SLUB is "much lower overhead but fewer bugs found".
KASAN is "slow but thorough". There are niches for both.

But I could see KASAN eventually deprecating kmemcheck, which
is just incredibly slow.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 21:44     ` Andi Kleen
  (?)
@ 2014-07-09 21:59       ` Vegard Nossum
  -1 siblings, 0 replies; 862+ messages in thread
From: Vegard Nossum @ 2014-07-09 21:59 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Dave Hansen, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kbuild,
	linux-arm-kernel, x86 maintainers, Linux Memory Management List

On 9 July 2014 23:44, Andi Kleen <andi@firstfloor.org> wrote:
> Dave Hansen <dave.hansen@intel.com> writes:
>>
>> You're also claiming that "KASAN is better than all of
>
> better as in finding more bugs, but surely not better as in
> "do so with less overhead"
>
>> CONFIG_DEBUG_PAGEALLOC".  So should we just disallow (or hide)
>> DEBUG_PAGEALLOC on kernels where KASAN is available?
>
> I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.
>
> DEBUG_PAGEALLOC/SLUB is "much lower overhead but less bugs found".
> KASAN is "slow but thorough" There are niches for both.
>
> But I could see KASAN eventually deprecating kmemcheck, which
> is just incredible slow.

FWIW, I definitely agree with this -- if KASAN can do everything that
kmemcheck can, it is no doubt the right way forward.


Vegard

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 21:59       ` Vegard Nossum
  (?)
@ 2014-07-09 23:33         ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-09 23:33 UTC (permalink / raw)
  To: Vegard Nossum, Andi Kleen
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, kbuild,
	linux-arm-kernel, x86 maintainers, Linux Memory Management List

On 07/09/2014 02:59 PM, Vegard Nossum wrote:
>> > But I could see KASAN eventually deprecating kmemcheck, which
>> > is just incredible slow.
> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.

That's very cool.  For what it's worth, the per-arch work does appear to
be pretty minimal and the things like the string function replacements
_should_ be able to be made generic.  Aren't the x86_32/x86_64 and arm
hooks pretty much copied-and-pasted?
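
As a rough illustration of the kind of generic string-function replacement being
discussed, here is a minimal sketch; kasan_check_read()/kasan_check_write() are
placeholder names, not functions from this patch set:

/*
 * Sketch of an arch-independent instrumented memcpy(): validate both
 * ranges against the shadow, then fall through to the real memcpy().
 * kasan_check_read()/kasan_check_write() are hypothetical helpers here.
 */
void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	kasan_check_read(src, len);
	kasan_check_write(dst, len);
	return memcpy(dst, src, len);
}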


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 21:59       ` Vegard Nossum
  (?)
@ 2014-07-10  0:03         ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-07-10  0:03 UTC (permalink / raw)
  To: Vegard Nossum
  Cc: Andi Kleen, Dave Hansen, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, kbuild,
	linux-arm-kernel, x86 maintainers, Linux Memory Management List

> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.

Thanks

BTW I didn't want to sound like I'm against kmemcheck. It is a very
useful tool and was impressive work given the constraints (no help from
the compiler).

-andi

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 14:26     ` Christoph Lameter
  (?)
@ 2014-07-10  7:31       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10  7:31 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/14 18:26, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> +
>> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:
>> +
>> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
> 
> We call these zones "PADDING". Redzones are associated with an object.
> Padding is there because bytes are left over, unusable or necessary for
> alignment.
> 
Good point. I will change the name to make it less confusing.
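
For reference, here is a minimal sketch of how such shadow bytes get interpreted
on a 1-byte access, assuming the 1:8 encoding described in the cover letter; the
helper name and the open-coded mask are illustrative, not taken from this patch:

/*
 * Sketch of a 1-byte access check under the shadow encoding above:
 * 0 means the whole 8-byte granule is accessible, 1..7 means only the
 * first k bytes are, and negative markers (KASAN_FREE_PAGE etc.) mean
 * none are.
 */
static bool memory_is_poisoned_1(unsigned long addr)
{
	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

	if (unlikely(shadow_value)) {
		/* offset of the accessed byte inside its 8-byte granule */
		s8 last_byte = addr & 7;
		return unlikely(last_byte >= shadow_value);
	}
	return false;
}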

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
  2014-07-09 14:29     ` Christoph Lameter
  (?)
@ 2014-07-10  7:41       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10  7:41 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/14 18:29, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> Remove static and add function declarations to mm/slab.h so they
>> could be used by kernel address sanitizer.
> 
> Hmmm... This is allocator specific. At some future point it would be good
> to move error reporting to slab_common.c and use those from all
> allocators.
> 

I could move the declarations to kasan internals, but that would look ugly too.
I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some point in the future.
I can't tell right now how hard that will be, but it seems doable.
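
For reference, the declarations in question would look roughly like this in
mm/slab.h; the signatures follow the SLUB definitions of the time, so treat
this as a sketch of the patch rather than its final form:

/* mm/slab.h: made non-static so mm/kasan/ can reuse SLUB's error reporting */
void object_err(struct kmem_cache *s, struct page *page,
		u8 *object, char *reason);
void slab_err(struct kmem_cache *s, struct page *page,
	      const char *fmt, ...);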


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
  2014-07-09 14:32     ` Christoph Lameter
  (?)
@ 2014-07-10  7:43       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10  7:43 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/14 18:32, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> To avoid false positive reports in kernel address sanitizer krealloc/kzfree
>> functions shouldn't be instrumented. Since we want to instrument other
>> functions in mm/util.c, krealloc/kzfree moved to slab_common.c which is not
>> instrumented.
>>
>> Unfortunately we can't completely disable instrumentation for one function.
>> We could disable compiler's instrumentation for one function by using
>> __atribute__((no_sanitize_address)).
>> But the problem here is that memset call will be replaced by instumented
>> version kasan_memset since currently it's implemented as define:
> 
> Looks good to me and useful regardless of the sanitizer going in.
> 
> Acked-by: Christoph Lameter <cl@linux.com>
> 

I also noticed in mm/util.c:

	/* Tracepoints definitions. */
	EXPORT_TRACEPOINT_SYMBOL(kmalloc);
	EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
	EXPORT_TRACEPOINT_SYMBOL(kmalloc_node);
	EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node);
	EXPORT_TRACEPOINT_SYMBOL(kfree);
	EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);

Should I send another patch to move this to slab_common.c?
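
As a side note on the memset problem quoted above, a small hedged illustration;
the macro form and kasan_memset come from the patch description, while
example_zero_object() is a hypothetical helper:

/*
 * Why __attribute__((no_sanitize_address)) alone is not enough: the
 * attribute only drops the compiler-inserted checks, but memset() here
 * is a macro expanding to kasan_memset(), which still checks the shadow.
 */
#define memset(p, v, n) kasan_memset((p), (v), (n))

__attribute__((no_sanitize_address))
static void example_zero_object(void *p, size_t n)
{
	memset(p, 0, n);	/* still expands to kasan_memset() */
}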



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
  2014-07-09 14:33     ` Christoph Lameter
  (?)
@ 2014-07-10  8:44       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10  8:44 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/14 18:33, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> When caller creates new kmem_cache, requested size of kmem_cache
>> will be stored in alloc_size. Later alloc_size will be used by
>> kerenel address sanitizer to mark alloc_size of slab object as
>> accessible and the rest of its size as redzone.
> 
> I think this patch is not needed since object_size == alloc_size right?
> 

I vaguely remember there was a reason for this patch, but I can't see/recall it now.
Probably I misunderstood something. I'll drop this patch.
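
For context, a sketch of the poisoning that only needs the existing object_size
and size fields of struct kmem_cache, which is why the extra alloc_size field can
go away; poison_shadow()/unpoison_shadow() are placeholder names, and the helper
below is hypothetical:

/*
 * Mark the usable part of a slab object accessible and the rest
 * (metadata/padding up to cache->size) as redzone, using only the
 * fields struct kmem_cache already has.
 */
static void kasan_mark_slab_object(struct kmem_cache *cache, void *object)
{
	unpoison_shadow(object, cache->object_size);
	poison_shadow((char *)object + cache->object_size,
		      cache->size - cache->object_size,
		      KASAN_SLAB_REDZONE);
}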

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
  2014-07-09 14:48     ` Christoph Lameter
  (?)
@ 2014-07-10  9:24       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10  9:24 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/09/14 18:48, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> With this patch kasan will be able to catch bugs in memory allocated
>> by slub.
>> Allocated slab page, this whole page marked as unaccessible
>> in corresponding shadow memory.
>> On allocation of slub object requested allocation size marked as
>> accessible, and the rest of the object (including slub's metadata)
>> marked as redzone (unaccessible).
>>
>> We also mark object as accessible if ksize was called for this object.
>> There is some places in kernel where ksize function is called to inquire
>> size of really allocated area. Such callers could validly access whole
>> allocated memory, so it should be marked as accessible by kasan_krealloc call.
> 
> Do you really need to go through all of this? Add the hooks to
> kmem_cache_alloc_trace() instead and use the existing instrumentation
> that is there for other purposes?
> 

I could move the kasan_kmalloc hooks to kmem_cache_alloc_trace(), and I think it will look better.
However, it will require two hooks instead of one (for CONFIG_TRACING=y and CONFIG_TRACING=n).

Btw, it seems I broke the CONFIG_SL[AO]B configurations in this patch by introducing the __ksize function,
which is now used in krealloc.
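
For what it's worth, the ksize() handling described above could be sketched like
this; __ksize() is the helper mentioned here, and whether the hook lives exactly
in ksize() is an assumption:

/*
 * Sketch of the ksize() hook: after ksize() the caller may legitimately
 * touch the whole allocated area, so mark it all accessible again via
 * kasan_krealloc(), as described in the patch changelog.
 */
size_t ksize(const void *object)
{
	size_t size = __ksize(object);	/* allocator's real object size */

	kasan_krealloc(object, size);
	return size;
}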

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 11:29   ` Andrey Ryabinin
  (?)
@ 2014-07-10 11:55     ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-07-10 11:55 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm

On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
> 
> The main features of kasan is:
>  - is based on compiler instrumentation (fast),
>  - detects out of bounds for both writes and reads,
>  - provides use after free detection,
> 
> This patch only adds infrastructure for kernel address sanitizer. It's not
> available for use yet. The idea and some code was borrowed from [1].
> 
> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
> latter).
> 
> Implementation details:
> The main idea of KASAN is to use shadow memory to record whether each byte of memory
> is safe to access or not, and use compiler's instrumentation to check the shadow memory
> on each memory access.
> 
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
> 
> Here is function to translate address to corresponding shadow address:
> 
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>                              + kasan_shadow_start;
>      }
> 
> where KASAN_SHADOW_SCALE_SHIFT = 3.
> 
> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
> corresponding memory region are valid for access; k (1 <= k <= 7) means that
> the first k bytes are valid for access, and other (8 - k) bytes are not;
> Any negative value indicates that the entire 8-bytes are unaccessible.
> Different negative values used to distinguish between different kinds of
> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
> 
> To be able to detect accesses to bad memory we need a special compiler.
> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
> before each memory access of size 1, 2, 4, 8 or 16.
> 
> These functions check whether memory region is valid to access or not by checking
> corresponding shadow memory. If access is not valid an error printed.
> 
> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>

I gave it a spin, and it seems that it fails for what you might call a "regular"
memory size these days, in my case it was 18G:

[    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
[    0.000000]
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
[    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
[    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
[    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
[    0.000000] Call Trace:
[    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
[    0.000000] panic (kernel/panic.c:119)
[    0.000000] memblock_alloc_base (mm/memblock.c:1092)
[    0.000000] memblock_alloc (mm/memblock.c:1097)
[    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
[    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
[    0.000000] paging_init (arch/x86/mm/init_64.c:677)
[    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
[    0.000000] ? printk (kernel/printk/printk.c:1839)
[    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
[    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
[    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
[    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)

It got better when I reduced memory to 1GB, but then my system just failed to boot
at all because that's not enough to bring everything up.


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 19:29     ` Andi Kleen
  (?)
@ 2014-07-10 12:10       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 12:10 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm, dave.hansen

On 07/09/14 23:29, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> 
> Seems like a useful facility. Thanks for working on it. Overall the code
> looks fairly good. Some comments below.
> 
> 
>> +
>> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
>> +fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>> +
>> +KASAN is better than all of CONFIG_DEBUG_PAGEALLOC, because it:
>> + - is based on compiler instrumentation (fast),
>> + - detects OOB for both writes and reads,
>> + - provides UAF detection,
> 
> Please expand the acronym.
> 
Sure, will do.

>> +
>> +|--------|        |--------|
>> +| Memory |----    | Memory |
>> +|--------|    \   |--------|
>> +| Shadow |--   -->| Shadow |
>> +|--------|  \     |--------|
>> +|   Bad  |   ---->|  Bad   |
>> +|--------|  /     |--------|
>> +| Shadow |--   -->| Shadow |
>> +|--------|    /   |--------|
>> +| Memory |----    | Memory |
>> +|--------|        |--------|
> 
> I guess this implies it's incompatible with memory hotplug, as the 
> shadow couldn't be extended?
> 
> That's fine, but you should exclude that in Kconfig.
> 
> There are likely more exclude dependencies for Kconfig too.
> Neds dependencies on the right sparse mem options?
> Does it work with kmemcheck? If not exclude.
> 
> Perhaps try to boot it with all other debug options and see which ones break.
> 

Besides Kconfig dependencies I might need to disable instrumentation in some places.
For example kasan doesn't play well with kmemleak. Kmemleak may look for pointers inside redzones
and kasan treats this as an error.

>> diff --git a/Makefile b/Makefile
>> index 64ab7b3..08a07f2 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
>>  CFLAGS_KERNEL	=
>>  AFLAGS_KERNEL	=
>>  CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
>> +CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
>> +			--param asan-use-after-return=0 \
>> +			--param asan-globals=0 \
>> +			--param asan-memintrin=0 \
>> +			--param asan-instrumentation-with-call-threshold=0 \
> 
> Hardcoding --param is not very nice. They can change from compiler
> to compiler version. Need some version checking?
> 
> Also you should probably have some check that the compiler supports it
> (and print some warning if not)
> Otherwise randconfig builds will be broken if the compiler doesn't.
> 
> Also does the kernel really build/work without the other patches?
> If not please move this patchkit to the end of the series, to keep
> the patchkit bisectable (this may need moving parts of the includes
> into a separate patch)
> 
It's buildable. At this point you can't select CONFIG_KASAN=y because there is no
arch that supports kasan (no HAVE_ARCH_KASAN config selected yet). But after the x86 patches the
kernel can be built and run with kasan. At that point kasan will be able to catch only "wild" memory
accesses (when someone outside mm/kasan/* tries to access shadow memory).
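
As for the compiler check, something like the usual cc-option test could probably be used to
warn when the compiler doesn't support -fsanitize=address (just a sketch, not tested):

ifdef CONFIG_KASAN
  ifeq ($(call cc-option, -fsanitize=address),)
    $(warning Cannot use CONFIG_KASAN: -fsanitize=address is not supported by compiler)
  endif
endif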

>> diff --git a/commit b/commit
>> new file mode 100644
>> index 0000000..134f4dd
>> --- /dev/null
>> +++ b/commit
>> @@ -0,0 +1,3 @@
>> +
>> +I'm working on address sanitizer for kernel.
>> +fuck this bloody.
>> \ No newline at end of file
> 
> Heh. Please remove.
> 

Oops. No idea how it got there :)

>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> new file mode 100644
>> index 0000000..2bfff78
>> --- /dev/null
>> +++ b/lib/Kconfig.kasan
>> @@ -0,0 +1,20 @@
>> +config HAVE_ARCH_KASAN
>> +	bool
>> +
>> +if HAVE_ARCH_KASAN
>> +
>> +config KASAN
>> +	bool "AddressSanitizer: dynamic memory error detector"
>> +	default n
>> +	help
>> +	  Enables AddressSanitizer - dynamic memory error detector,
>> +	  that finds out-of-bounds and use-after-free bugs.
> 
> Needs much more description.
> 
>> +
>> +config KASAN_SANITIZE_ALL
>> +	bool "Instrument entire kernel"
>> +	depends on KASAN
>> +	default y
>> +	help
>> +	  This enables compiler intrumentation for entire kernel
>> +
> 
> Same.
> 
> 
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> new file mode 100644
>> index 0000000..e2cd345
>> --- /dev/null
>> +++ b/mm/kasan/kasan.c
>> @@ -0,0 +1,292 @@
>> +/*
>> + *
> 
> Add one line here what the file does. Same for other files.
> 
>> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
>> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> +#include "kasan.h"
>> +#include "../slab.h"
> 
> That's ugly, but ok.
Hm... "../slab.h" is not needed in this file. linux/slab.h is enough here.

> 
>> +
>> +static bool __read_mostly kasan_initialized;
> 
> It would be better to use a static_key, but I guess your initialization
> is too early?

No, not too early. kasan_init_shadow, which switches this flag, is called just after jump_label_init,
so timing is not a problem for static_key, but there is another issue.
I tried a static key here. It works really well on arm, but it has some problems on x86.
While the static key is being switched by static_key_slow_inc, the first byte of the key is replaced
with a breakpoint (look at text_poke_bp()). After that, on the first memory access __asan_load/__asan_store
is called and we end up executing this breakpoint from the very code that is trying to update that instruction:

text_poke_bp()
{
	....
	//replace first byte with breakpoint
		....
			___asan_load*()
				....
				if (static_key_false(&kasan_initialized)) <-- static key update still in progress
		....
	//patching code done
}

To make static_key work on x86 I would need to disable instrumentation in text_poke_bp() and in any
other function called from it. That could be a big problem if text_poke_bp uses some very generic functions.
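
For reference, the static_key variant would look roughly like this (a sketch only; it assumes
text_poke_bp() and everything it calls is excluded from instrumentation, which is exactly the
problematic part):

static struct static_key kasan_initialized = STATIC_KEY_INIT_FALSE;

static inline bool kasan_enabled(void)
{
	return static_key_false(&kasan_initialized) && !current->kasan_depth;
}

/* and in kasan_init_shadow(), once the shadow is ready: */
	static_key_slow_inc(&kasan_initialized);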

Another, better option would be to get rid of the kasan_initialized check in kasan_enabled(), which currently is:
static inline bool kasan_enabled(void)
{
	return likely(kasan_initialized
		&& !current->kasan_depth);
}


> 
> Of course the proposal to move it into start_kernel and get rid of the
> flag would be best.
>

That's the plan for the future.


>> +
>> +unsigned long kasan_shadow_start;
>> +unsigned long kasan_shadow_end;
>> +
>> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
>> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
> 
> Do these all need to be global?
> 

For now only kasan_shadow_start and kasan_shadow_offset need to be global.
It should also be possible to stop using kasan_shadow_start in kasan_shadow_to_mem() and make it static.

>> +
>> +
>> +static inline bool addr_is_in_mem(unsigned long addr)
>> +{
>> +	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> +}
> 
> Of course there are lots of cases where this doesn't work (like large
> holes), but I assume this has been checked elsewhere?
> 
Seems I need to do some work for sparsemem configurations.

> 
>> +
>> +void kasan_enable_local(void)
>> +{
>> +	if (likely(kasan_initialized))
>> +		current->kasan_depth--;
>> +}
>> +
>> +void kasan_disable_local(void)
>> +{
>> +	if (likely(kasan_initialized))
>> +		current->kasan_depth++;
>> +}
> 
> Couldn't this be done without checking the flag?
> 
Not sure. Do we always have current available? I assume it is initialized at some point of the boot process.
I will check that.


> 
>> +		return;
>> +
>> +	if (unlikely(addr < TASK_SIZE)) {
>> +		info.access_addr = addr;
>> +		info.access_size = size;
>> +		info.is_write = write;
>> +		info.ip = _RET_IP_;
>> +		kasan_report_user_access(&info);
>> +		return;
>> +	}
> 
> How about vsyscall pages here?
> 

Not sure what you mean. Could you please elaborate?

>> +
>> +	if (!addr_is_in_mem(addr))
>> +		return;
>> +
>> +	access_addr = memory_is_poisoned(addr, size);
>> +	if (likely(access_addr == 0))
>> +		return;
>> +
>> +	info.access_addr = access_addr;
>> +	info.access_size = size;
>> +	info.is_write = write;
>> +	info.ip = _RET_IP_;
>> +	kasan_report_error(&info);
>> +}
>> +
>> +void __init kasan_alloc_shadow(void)
>> +{
>> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
>> +	unsigned long shadow_size;
>> +	phys_addr_t shadow_phys_start;
>> +
>> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
>> +
>> +	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
>> +	if (!shadow_phys_start) {
>> +		pr_err("Unable to reserve shadow memory\n");
>> +		return;
> 
> Wouldn't this crash&burn later? panic?
> 

As Sasha already reported, it will panic in memblock_alloc.

>> +void *kasan_memcpy(void *dst, const void *src, size_t len)
>> +{
>> +	if (unlikely(len == 0))
>> +		return dst;
>> +
>> +	check_memory_region((unsigned long)src, len, false);
>> +	check_memory_region((unsigned long)dst, len, true);
> 
> I assume this handles negative len?
> Also check for overlaps?
> 
Will do.
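
A rough sketch of what such checks could look like (illustration only; the pr_err calls are
placeholders for proper reports, and the final memcpy is assumed):

void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	unsigned long d = (unsigned long)dst;
	unsigned long s = (unsigned long)src;

	if (unlikely(len == 0))
		return dst;

	/* len is size_t, so a negative value passed by a caller
	 * shows up here as a huge unsigned size
	 */
	if (unlikely((long)len < 0))
		pr_err("kasan: memcpy with negative length\n");

	/* memcpy() requires non-overlapping buffers */
	if (unlikely(d < s + len && s < d + len))
		pr_err("kasan: memcpy with overlapping regions\n");

	check_memory_region(s, len, false);
	check_memory_region(d, len, true);

	return memcpy(dst, src, len);
}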

>> +
>> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
>> +{
>> +	return x - ((x - slab_start) % s->size);
>> +}
> 
> This should be in the respective slab headers, not hard coded.
> 
Agreed.

>> +void kasan_report_error(struct access_info *info)
>> +{
>> +	kasan_disable_local();
>> +	pr_err("================================="
>> +		"=================================\n");
>> +	print_error_description(info);
>> +	print_address_description(info);
>> +	print_shadow_for_address(info->access_addr);
>> +	pr_err("================================="
>> +		"=================================\n");
>> +	kasan_enable_local();
>> +}
>> +
>> +void kasan_report_user_access(struct access_info *info)
>> +{
>> +	kasan_disable_local();
> 
> Should print the same prefix oopses use, a lot of log grep tools
> look for that. 
> 
Ok

> Also you may want some lock to prevent multiple
> reports mixing. 

I think hiding it behind
 if (spin_trylock) { ... }

would be enough.
It might also be a good idea to add an option for reporting only the first error.
That would be useful in some cases (for example, strlen on a non-null-terminated string drives kasan crazy).
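
Something along these lines, for example (a sketch only):

static DEFINE_SPINLOCK(report_lock);

void kasan_report_error(struct access_info *info)
{
	unsigned long flags;

	/* skip this report if another one is already being printed */
	if (!spin_trylock_irqsave(&report_lock, flags))
		return;

	kasan_disable_local();
	print_error_description(info);
	print_address_description(info);
	print_shadow_for_address(info->access_addr);
	kasan_enable_local();

	spin_unlock_irqrestore(&report_lock, flags);
}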

Thanks for the review.

> 
> -Andi
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-09 20:26     ` Dave Hansen
  (?)
@ 2014-07-10 12:12       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 12:12 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/10/14 00:26, Dave Hansen wrote:
> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is function to translate address to corresponding shadow address:
>>
>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>      {
>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>                              + kasan_shadow_start;
>>      }
> 
> How does this interact with vmalloc() addresses or those from a kmap()?
> 

It's used only for lowmem:

static inline bool addr_is_in_mem(unsigned long addr)
{
	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
}



static __always_inline void check_memory_region(unsigned long addr,
						size_t size, bool write)
{

	....
	if (!addr_is_in_mem(addr))
		return;
	// check shadow here


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 11:55     ` Sasha Levin
  (?)
@ 2014-07-10 13:01       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:01 UTC (permalink / raw)
  To: Sasha Levin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm,
	Dave Hansen

On 07/10/14 15:55, Sasha Levin wrote:
> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>
>> The main features of kasan is:
>>  - is based on compiler instrumentation (fast),
>>  - detects out of bounds for both writes and reads,
>>  - provides use after free detection,
>>
>> This patch only adds infrastructure for kernel address sanitizer. It's not
>> available for use yet. The idea and some code was borrowed from [1].
>>
>> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>> latter).
>>
>> Implementation details:
>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>> on each memory access.
>>
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is function to translate address to corresponding shadow address:
>>
>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>      {
>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>                              + kasan_shadow_start;
>>      }
>>
>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>
>> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>> the first k bytes are valid for access, and other (8 - k) bytes are not;
>> Any negative value indicates that the entire 8-bytes are unaccessible.
>> Different negative values used to distinguish between different kinds of
>> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>
>> To be able to detect accesses to bad memory we need a special compiler.
>> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>> before each memory access of size 1, 2, 4, 8 or 16.
>>
>> These functions check whether memory region is valid to access or not by checking
>> corresponding shadow memory. If access is not valid an error printed.
>>
>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> 
> I gave it a spin, and it seems that it fails for what you might call a "regular"
> memory size these days, in my case it was 18G:
> 
> [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
> [    0.000000]
> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
> [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
> [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
> [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
> [    0.000000] Call Trace:
> [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
> [    0.000000] panic (kernel/panic.c:119)
> [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
> [    0.000000] memblock_alloc (mm/memblock.c:1097)
> [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
> [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
> [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
> [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
> [    0.000000] ? printk (kernel/printk/printk.c:1839)
> [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
> [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
> [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
> [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
> 
> It got better when I reduced memory to 1GB, but then my system just failed to boot
> at all because that's not enough to bring everything up.
> 

Thanks.
I don't think memory size itself is the problem here. I tested on my desktop with 16G.
It seems to be the problem with memory holes that Dave mentioned.
kasan tries to allocate ~3.5G of shadow (0xe0c00000 bytes), and since the shadow is 1/8 of lowmem,
that implies a lowmem size of about 28G in your case.


> 
> Thanks,
> Sasha
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot
  2014-07-09 19:33     ` Andi Kleen
  (?)
@ 2014-07-10 13:15       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:15 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

On 07/09/14 23:33, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> 
>> Instrumentation of this files may result in unbootable machine.
> 
> This doesn't make sense. Is the code not NMI safe? 
> If yes that would need to be fixed because
> 
> Please debug more.
> 

Sure.
It turns out that KASAN_SANITIZE_perf_event.o := n is not needed here.
The problem is only with common.c.
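
I.e. only something like this should remain (assuming the Makefile in question is
arch/x86/kernel/cpu/Makefile):

KASAN_SANITIZE_common.o := n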

> perf is a common source of bugs (see Vince Weaver's fuzzer results),
> so it would be good to have this functionality for it.
> 
> -Andi
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 13:01       ` Andrey Ryabinin
  (?)
@ 2014-07-10 13:31         ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-07-10 13:31 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm,
	Dave Hansen

On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
> On 07/10/14 15:55, Sasha Levin wrote:
>> > On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>> >> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>> >>
>>> >> The main features of kasan is:
>>> >>  - is based on compiler instrumentation (fast),
>>> >>  - detects out of bounds for both writes and reads,
>>> >>  - provides use after free detection,
>>> >>
>>> >> This patch only adds infrastructure for kernel address sanitizer. It's not
>>> >> available for use yet. The idea and some code was borrowed from [1].
>>> >>
>>> >> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>>> >> latter).
>>> >>
>>> >> Implementation details:
>>> >> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>> >> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>> >> on each memory access.
>>> >>
>>> >> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> >> mapping with a scale and offset to translate a memory address to its corresponding
>>> >> shadow address.
>>> >>
>>> >> Here is function to translate address to corresponding shadow address:
>>> >>
>>> >>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>> >>      {
>>> >>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>> >>                              + kasan_shadow_start;
>>> >>      }
>>> >>
>>> >> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>> >>
>>> >> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>>> >> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>>> >> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>> >> the first k bytes are valid for access, and other (8 - k) bytes are not;
>>> >> Any negative value indicates that the entire 8-bytes are unaccessible.
>>> >> Different negative values used to distinguish between different kinds of
>>> >> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>> >>
>>> >> To be able to detect accesses to bad memory we need a special compiler.
>>> >> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>>> >> before each memory access of size 1, 2, 4, 8 or 16.
>>> >>
>>> >> These functions check whether memory region is valid to access or not by checking
>>> >> corresponding shadow memory. If access is not valid an error printed.
>>> >>
>>> >> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>> >>
>>> >> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> > 
>> > I gave it a spin, and it seems that it fails for what you might call a "regular"
>> > memory size these days, in my case it was 18G:
>> > 
>> > [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>> > [    0.000000]
>> > [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>> > [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>> > [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>> > [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>> > [    0.000000] Call Trace:
>> > [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>> > [    0.000000] panic (kernel/panic.c:119)
>> > [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
>> > [    0.000000] memblock_alloc (mm/memblock.c:1097)
>> > [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>> > [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>> > [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
>> > [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>> > [    0.000000] ? printk (kernel/printk/printk.c:1839)
>> > [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>> > [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>> > [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>> > [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>> > 
>> > It got better when I reduced memory to 1GB, but then my system just failed to boot
>> > at all because that's not enough to bring everything up.
>> > 
> Thanks.
> I don't think memory size is the problem here. I tested on my desktop with 16G.
> It seems to be a problem with the memory holes cited by Dave.
> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.
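
A rough cross-check, assuming the shadow must cover the whole direct mapping as
described in the changelog:

     /*
      * 0xe0c00000 bytes  ~= 3.5 GB of shadow
      * 3.5 GB * 8        ~= 28 GB of directly mapped address space
      */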

That's correct (I mistyped and wrote 18 instead of 28 above).

However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
thing, so I'm not sure how it applies here.

Anyway, the machine won't boot with more than 1GB of RAM. Is there a solution to
get KASAN running on my machine?


Thanks,
Sasha


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 13:31         ` Sasha Levin
  (?)
@ 2014-07-10 13:39           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:39 UTC (permalink / raw)
  To: Sasha Levin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm,
	Dave Hansen

On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> On 07/10/14 15:55, Sasha Levin wrote:
>>>> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>>>>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>>>>>
>>>>>> The main features of kasan are:
>>>>>>  - is based on compiler instrumentation (fast),
>>>>>>  - detects out of bounds for both writes and reads,
>>>>>>  - provides use after free detection,
>>>>>>
>>>>>> This patch only adds infrastructure for kernel address sanitizer. It's not
>>>>>> available for use yet. The idea and some code was borrowed from [1].
>>>>>>
>>>>>> This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
>>>>>> later).
>>>>>>
>>>>>> Implementation details:
>>>>>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>>>>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>>>>> on each memory access.
>>>>>>
>>>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>>>> shadow address.
>>>>>>
>>>>>> Here is the function that translates an address to its corresponding shadow address:
>>>>>>
>>>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>>>      {
>>>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>>>                              + kasan_shadow_start;
>>>>>>      }
>>>>>>
>>>>>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>>>>>
>>>>>> So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
>>>>>> The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
>>>>>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>>>>> the first k bytes are valid for access, and the other (8 - k) bytes are not;
>>>>>> any negative value indicates that the entire 8-byte region is inaccessible.
>>>>>> Different negative values are used to distinguish between different kinds of
>>>>>> inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.
>>>>>>
>>>>>> To be able to detect accesses to bad memory we need a special compiler.
>>>>>> Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
>>>>>> before each memory access of size 1, 2, 4, 8 or 16.
>>>>>>
>>>>>> These functions check whether the memory region is valid to access by consulting the
>>>>>> corresponding shadow memory. If the access is not valid, an error is reported.
>>>>>>
>>>>>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>>>>>
>>>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>>
>>>> I gave it a spin, and it seems that it fails for what you might call a "regular"
>>>> memory size these days, in my case it was 18G:
>>>>
>>>> [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>>>> [    0.000000]
>>>> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>>>> [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>>>> [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>>>> [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>>>> [    0.000000] Call Trace:
>>>> [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>>>> [    0.000000] panic (kernel/panic.c:119)
>>>> [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
>>>> [    0.000000] memblock_alloc (mm/memblock.c:1097)
>>>> [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>>>> [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>>>> [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
>>>> [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>>>> [    0.000000] ? printk (kernel/printk/printk.c:1839)
>>>> [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>>>> [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>>>> [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>>>> [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>>>>
>>>> It got better when I reduced memory to 1GB, but then my system just failed to boot
>>>> at all because that's not enough to bring everything up.
>>>>
>> Thanks.
>> I don't think memory size is the problem here. I tested on my desktop with 16G.
>> It seems to be a problem with the memory holes cited by Dave.
>> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.
> 
> That's correct (I mistyped and wrote 18 instead of 28 above).
> 
> However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
> thing, so I'm not sure how it applies here.
> 
Right. By lowmem size here I mean the size of the direct
mapping of all physical memory (which is usually called lowmem on 32-bit systems).



> Anyway, the machine won't boot with more than 1GB of RAM. Is there a solution to
> get KASAN running on my machine?
> 
Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
The boot cmdline might also help.

> 
> Thanks,
> Sasha
> 
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 13:31         ` Sasha Levin
  (?)
@ 2014-07-10 13:50           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:50 UTC (permalink / raw)
  To: Sasha Levin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm,
	Dave Hansen

On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> On 07/10/14 15:55, Sasha Levin wrote:
>>>> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>>>>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>>>>>
>>>>>> The main features of kasan are:
>>>>>>  - is based on compiler instrumentation (fast),
>>>>>>  - detects out of bounds for both writes and reads,
>>>>>>  - provides use after free detection,
>>>>>>
>>>>>> This patch only adds infrastructure for kernel address sanitizer. It's not
>>>>>> available for use yet. The idea and some code was borrowed from [1].
>>>>>>
>>>>>> This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
>>>>>> later).
>>>>>>
>>>>>> Implementation details:
>>>>>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>>>>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>>>>> on each memory access.
>>>>>>
>>>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>>>> shadow address.
>>>>>>
>>>>>> Here is the function that translates an address to its corresponding shadow address:
>>>>>>
>>>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>>>      {
>>>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>>>                              + kasan_shadow_start;
>>>>>>      }
>>>>>>
>>>>>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>>>>>
>>>>>> So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
>>>>>> The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
>>>>>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>>>>> the first k bytes are valid for access, and the other (8 - k) bytes are not;
>>>>>> any negative value indicates that the entire 8-byte region is inaccessible.
>>>>>> Different negative values are used to distinguish between different kinds of
>>>>>> inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.
>>>>>>
>>>>>> To be able to detect accesses to bad memory we need a special compiler.
>>>>>> Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
>>>>>> before each memory access of size 1, 2, 4, 8 or 16.
>>>>>>
>>>>>> These functions check whether the memory region is valid to access by consulting the
>>>>>> corresponding shadow memory. If the access is not valid, an error is reported.
>>>>>>
>>>>>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>>>>>
>>>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>>
>>>> I gave it a spin, and it seems that it fails for what you might call a "regular"
>>>> memory size these days, in my case it was 18G:
>>>>
>>>> [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>>>> [    0.000000]
>>>> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>>>> [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>>>> [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>>>> [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>>>> [    0.000000] Call Trace:
>>>> [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>>>> [    0.000000] panic (kernel/panic.c:119)
>>>> [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
>>>> [    0.000000] memblock_alloc (mm/memblock.c:1097)
>>>> [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>>>> [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>>>> [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
>>>> [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>>>> [    0.000000] ? printk (kernel/printk/printk.c:1839)
>>>> [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>>>> [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>>>> [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>>>> [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>>>>
>>>> It got better when I reduced memory to 1GB, but then my system just failed to boot
>>>> at all because that's not enough to bring everything up.
>>>>
>> Thanks.
>> I don't think memory size is the problem here. I tested on my desktop with 16G.
>> It seems to be a problem with the memory holes cited by Dave.
>> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.
> 
> That's correct (I mistyped and wrote 18 instead of 28 above).
> 
> However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
> thing, so I'm not sure how it applies here.
> 
> Anyway, the machine won't boot with more than 1GB of RAM. Is there a solution to
> get KASAN running on my machine?
> 

Does it fail to boot with the same 'Failed to allocate' error?

> 
> Thanks,
> Sasha
> 
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions
  2014-07-09 19:31     ` Andi Kleen
  (?)
@ 2014-07-10 13:54       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:54 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm

On 07/09/14 23:31, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>> +
>> +#undef memcpy
>> +void *kasan_memset(void *ptr, int val, size_t len);
>> +void *kasan_memcpy(void *dst, const void *src, size_t len);
>> +void *kasan_memmove(void *dst, const void *src, size_t len);
>> +
>> +#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
>> +#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
>> +#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
> 
> I don't think just a define is enough; gcc can call these functions
> implicitly too (both with and without __), for example for a struct copy.
> 
> You need to have true linker level aliases. 
> 

That's true, but the problem with linker aliases is that they cannot be disabled for the files
we don't want to instrument.

> -Andi
> 
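
For reference, the alias approach suggested above would look roughly like the
sketch below. This is an illustration only, not code from the series;
kasan_check_region() and instrumented_memcpy() are hypothetical names.

     #include <stddef.h>

     /* Hypothetical helper standing in for kasan's shadow check. */
     void kasan_check_region(const void *ptr, size_t len, int write);

     void *instrumented_memcpy(void *dst, const void *src, size_t len)
     {
             kasan_check_region(src, len, 0);
             kasan_check_region(dst, len, 1);
             return __builtin_memcpy(dst, src, len);
     }

     /*
      * The alias is resolved at the symbol level, so every reference to
      * memcpy - including the calls gcc emits implicitly for struct copies -
      * ends up in the instrumented wrapper, with or without any #define in
      * scope.
      */
     void *memcpy(void *dst, const void *src, size_t len)
             __attribute__((alias("instrumented_memcpy")));

The flip side, as noted in the reply above, is that the redirection then applies
to everything linked against the symbol, whereas the #define can simply be left
out of the files that must not be instrumented.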


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector.
  2014-07-09 21:59       ` Vegard Nossum
  (?)
@ 2014-07-10 13:59         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 13:59 UTC (permalink / raw)
  To: Vegard Nossum, Andi Kleen
  Cc: Dave Hansen, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, kbuild,
	linux-arm-kernel, x86 maintainers, Linux Memory Management List

On 07/10/14 01:59, Vegard Nossum wrote:
> On 9 July 2014 23:44, Andi Kleen <andi@firstfloor.org> wrote:
>> Dave Hansen <dave.hansen@intel.com> writes:
>>>
>>> You're also claiming that "KASAN is better than all of
>>
>> better as in finding more bugs, but surely not better as in
>> "do so with less overhead"
>>
>>> CONFIG_DEBUG_PAGEALLOC".  So should we just disallow (or hide)
>>> DEBUG_PAGEALLOC on kernels where KASAN is available?
>>
>> I don't think DEBUG_PAGEALLOC/SLUB debug and kasan really conflict.
>>
>> DEBUG_PAGEALLOC/SLUB is "much lower overhead but less bugs found".
>> KASAN is "slow but thorough" There are niches for both.
>>
>> But I could see KASAN eventually deprecating kmemcheck, which
>> is just incredibly slow.
> 
> FWIW, I definitely agree with this -- if KASAN can do everything that
> kmemcheck can, it is no doubt the right way forward.
> 

AFAIK kmemcheck can catch reads of uninitialized memory.
KASAN can't do that now, but it should be possible to implement.
There is such a tool for userspace - https://code.google.com/p/memory-sanitizer/wiki/MemorySanitizer

However, detecting reads of uninitialized memory would require a different
shadow encoding. Therefore I think it would be better to make it a separate feature, incompatible with kasan.



> 
> Vegard
> 
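
To illustrate the difference, a user-space sketch (not kernel code) of the kind of
bug only an initialized-ness shadow can catch:

     #include <stdlib.h>

     /*
      * An in-bounds read of memory that was never written.  A
      * kmemcheck/MemorySanitizer-style tool flags the use of 'x'; an
      * addressability-only shadow such as kasan's sees a perfectly valid
      * access and stays silent.
      */
     int read_uninitialized(void)
     {
             int *p = malloc(sizeof(*p));    /* allocated but never written */
             int x;

             if (!p)
                     return 0;

             x = *p;                         /* in bounds, so no kasan report */
             free(p);
             return x;
     }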


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 13:39           ` Andrey Ryabinin
@ 2014-07-10 14:02             ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-07-10 14:02 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Russell King, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm,
	Dave Hansen

[-- Attachment #1: Type: text/plain, Size: 1927 bytes --]

On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> > 
> Could you share you .config? I'll try to boot it by myself. It could be that some options conflicting with kasan.
> Also boot cmdline might help.
> 

Sure. It's the .config I use for fuzzing so it's rather big (attached).

The cmdline is:

[    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init

And the memory map:

[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable


On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> >
> It's not boot with the same Failed to allocate error?

I think I misunderstood your question here. With >1GB it triggers a panic() when
KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
because 1GB isn't enough to load everything - so it fails in some other random
spot as it runs out of memory.


Thanks,
Sasha

[-- Attachment #2: config.sasha.gz --]
[-- Type: application/gzip, Size: 40233 bytes --]

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 14:02             ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-07-10 14:02 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> > 
> Could you share you .config? I'll try to boot it by myself. It could be that some options conflicting with kasan.
> Also boot cmdline might help.
> 

Sure. It's the .config I use for fuzzing so it's rather big (attached).

The cmdline is:

[    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init

And the memory map:

[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable


On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> >
> It's not boot with the same Failed to allocate error?

I think I misunderstood your question here. With >1GB it triggers a panic() when
KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
because 1GB isn't enough to load everything - so it fails in some other random
spot as it runs out of memory.


Thanks,
Sasha
-------------- next part --------------
A non-text attachment was scrubbed...
Name: config.sasha.gz
Type: application/gzip
Size: 40233 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-arm-kernel/attachments/20140710/19f1c49f/attachment-0001.gz>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
  2014-07-10  7:41       ` Andrey Ryabinin
  (?)
@ 2014-07-10 14:07         ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:07 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> On 07/09/14 18:29, Christoph Lameter wrote:
> > On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> >
> >> Remove static and add function declarations to mm/slab.h so they
> >> could be used by kernel address sanitizer.
> >
> > Hmmm... This is allocator specific. At some future point it would be good
> > to move error reporting to slab_common.c and use those from all
> > allocators.
> >
>
> I could move declarations to kasan internals, but it will look ugly too.
> I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some future.
> I can't tell right now how hard it will be, but it seems doable.

Well, the simple approach is to first unify the reporting functions and
then work your way up to higher levels. The reporting functions could also
be more generalized to be more useful for multiple checking tools.
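
For illustration only (the helper below does not exist anywhere today; its
name and location are made up), a shared entry point could look roughly like:

	/*
	 * Sketch only: one reporting helper in mm/slab_common.c usable by
	 * SLAB, SLUB and external checkers such as kasan.
	 */
	void kmem_obj_report(struct kmem_cache *s, struct page *page,
			     void *object, const char *reason)
	{
		pr_err("%s: object %p on slab page %p: %s\n",
		       s ? s->name : "<unknown cache>", object, page, reason);
		dump_stack();
	}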


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
@ 2014-07-10 14:07         ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:07 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> On 07/09/14 18:29, Christoph Lameter wrote:
> > On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> >
> >> Remove static and add function declarations to mm/slab.h so they
> >> could be used by kernel address sanitizer.
> >
> > Hmmm... This is allocator specific. At some future point it would be good
> > to move error reporting to slab_common.c and use those from all
> > allocators.
> >
>
> I could move declarations to kasan internals, but it will look ugly too.
> I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some future.
> I can't tell right now how hard it will be, but it seems doable.

Well, the simple approach is to first unify the reporting functions and
then work your way up to higher levels. The reporting functions could also
be more generalized to be more useful for multiple checking tools.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions
@ 2014-07-10 14:07         ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:07 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> On 07/09/14 18:29, Christoph Lameter wrote:
> > On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> >
> >> Remove static and add function declarations to mm/slab.h so they
> >> could be used by kernel address sanitizer.
> >
> > Hmmm... This is allocator specific. At some future point it would be good
> > to move error reporting to slab_common.c and use those from all
> > allocators.
> >
>
> I could move declarations to kasan internals, but it will look ugly too.
> I also had an idea about unifying SLAB_DEBUG and SLUB_DEBUG at some future.
> I can't tell right now how hard it will be, but it seems doable.

Well, the simple approach is to first unify the reporting functions and
then work your way up to higher levels. The reporting functions could also
be more generalized to be more useful for multiple checking tools.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
  2014-07-10  7:43       ` Andrey Ryabinin
  (?)
@ 2014-07-10 14:08         ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:08 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> Should I send another patch to move this to slab_common.c?

Send one patch that is separate from this patchset to all slab
maintainers and include my ack.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
@ 2014-07-10 14:08         ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:08 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> Should I send another patch to move this to slab_common.c?

Send one patch that is separate from this patchset to all slab
maintainers and include my ack.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c
@ 2014-07-10 14:08         ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-10 14:08 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 10 Jul 2014, Andrey Ryabinin wrote:

> Should I send another patch to move this to slab_common.c?

Send one patch that is separate from this patchset to all slab
maintainers and include my ack.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 12:12       ` Andrey Ryabinin
  (?)
@ 2014-07-10 15:55         ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 15:55 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
> On 07/10/14 00:26, Dave Hansen wrote:
>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> mapping with a scale and offset to translate a memory address to its corresponding
>>> shadow address.
>>>
>>> Here is function to translate address to corresponding shadow address:
>>>
>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>      {
>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>                              + kasan_shadow_start;
>>>      }
>>
>> How does this interact with vmalloc() addresses or those from a kmap()?
>> 
> It's used only for lowmem:
> 
> static inline bool addr_is_in_mem(unsigned long addr)
> {
> 	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> }

That's fine, and definitely covers the common cases.  Could you make
sure to call this out explicitly?  Also, there's nothing to _keep_ this
approach from working for things outside the direct map, right?  It would
just be a matter of updating the shadow memory to have entries for the
other virtual address ranges.

addr_is_in_mem() is a pretty bad name for what it's doing. :)

I'd probably call it something like kasan_tracks_vaddr().
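
Something along these lines is all I'm picturing (sketch only; the extension
mentioned in the comment is hypothetical, nothing in this series populates
shadow outside lowmem):

	/* Sketch: same check as today, under the clearer name. */
	static inline bool kasan_tracks_vaddr(unsigned long addr)
	{
		if (addr >= PAGE_OFFSET && addr < (unsigned long)high_memory)
			return true;
		/*
		 * Hypothetical extension: also return true for e.g. the
		 * vmalloc range once shadow entries exist for it.
		 */
		return false;
	}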

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 15:55         ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 15:55 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-kbuild, linux-arm-kernel, x86,
	linux-mm

On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
> On 07/10/14 00:26, Dave Hansen wrote:
>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> mapping with a scale and offset to translate a memory address to its corresponding
>>> shadow address.
>>>
>>> Here is function to translate address to corresponding shadow address:
>>>
>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>      {
>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>                              + kasan_shadow_start;
>>>      }
>>
>> How does this interact with vmalloc() addresses or those from a kmap()?
>> 
> It's used only for lowmem:
> 
> static inline bool addr_is_in_mem(unsigned long addr)
> {
> 	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> }

That's fine, and definitely covers the common cases.  Could you make
sure to call this out explicitly?  Also, there's nothing to _keep_ this
approach from working for things outside the direct map, right?  It would
just be a matter of updating the shadow memory to have entries for the
other virtual address ranges.

addr_is_in_mem() is a pretty bad name for what it's doing. :)

I'd probably call it something like kasan_tracks_vaddr().


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 15:55         ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 15:55 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
> On 07/10/14 00:26, Dave Hansen wrote:
>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>> mapping with a scale and offset to translate a memory address to its corresponding
>>> shadow address.
>>>
>>> Here is function to translate address to corresponding shadow address:
>>>
>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>      {
>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>                              + kasan_shadow_start;
>>>      }
>>
>> How does this interact with vmalloc() addresses or those from a kmap()?
>> 
> It's used only for lowmem:
> 
> static inline bool addr_is_in_mem(unsigned long addr)
> {
> 	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> }

That's fine, and definitely covers the common cases.  Could you make
sure to call this out explicitly?  Also, there's nothing to _keep_ this
approach from working for things outside the direct map, right?  It would
just be a matter of updating the shadow memory to have entries for the
other virtual address ranges.

addr_is_in_mem() is a pretty bad name for what it's doing. :)

I'd probably call it something like kasan_tracks_vaddr().

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 14:02             ` Sasha Levin
  (?)
@ 2014-07-10 19:04               ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:04 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm, Dave Hansen

2014-07-10 18:02 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Could you share you .config? I'll try to boot it by myself. It could be that some options conflicting with kasan.
>> Also boot cmdline might help.
>>
>
> Sure. It's the .config I use for fuzzing so it's rather big (attached).
>
> The cmdline is:
>
> [    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
>
> And the memory map:
>
> [    0.000000] e820: BIOS-provided physical RAM map:
> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
> [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
> [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
> [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
> [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable
>
>
> On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> It's not boot with the same Failed to allocate error?
>
> I think I misunderstood your question here. With >1GB is triggers a panic() when
> KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
> because 1GB isn't enough to load everything - so it fails in some other random
> spot as it runs on out memory.
>
>
> Thanks,
> Sasha

Looks like I found where the problem is. memblock_alloc cannot allocate
across NUMA nodes, therefore kasan fails for numa=fake>=8.
You should succeed with numa=fake=7 or less.
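
One possible way around that (just an untested sketch, assuming the
memblock_alloc_try_nid(size, align, nid) interface) would be to allocate
the shadow in per-node chunks instead of one big block:

	/*
	 * Sketch only: allocate shadow node by node so no single memblock
	 * allocation has to span a node boundary.
	 */
	static phys_addr_t __init kasan_alloc_shadow_nid(int nid, phys_addr_t size)
	{
		/* the _try_ variant falls back to any node on failure */
		return memblock_alloc_try_nid(size, PAGE_SIZE, nid);
	}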


-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 19:04               ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:04 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-kbuild,
	linux-arm-kernel, x86, linux-mm, Dave Hansen

2014-07-10 18:02 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Could you share you .config? I'll try to boot it by myself. It could be that some options conflicting with kasan.
>> Also boot cmdline might help.
>>
>
> Sure. It's the .config I use for fuzzing so it's rather big (attached).
>
> The cmdline is:
>
> [    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
>
> And the memory map:
>
> [    0.000000] e820: BIOS-provided physical RAM map:
> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
> [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
> [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
> [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
> [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable
>
>
> On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> It's not boot with the same Failed to allocate error?
>
> I think I misunderstood your question here. With >1GB is triggers a panic() when
> KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
> because 1GB isn't enough to load everything - so it fails in some other random
> spot as it runs on out memory.
>
>
> Thanks,
> Sasha

Looks like I found where the problem is. memblock_alloc cannot allocate
across NUMA nodes, therefore kasan fails for numa=fake>=8.
You should succeed with numa=fake=7 or less.


-- 
Best regards,
Andrey Ryabinin


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 19:04               ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:04 UTC (permalink / raw)
  To: linux-arm-kernel

2014-07-10 18:02 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Could you share you .config? I'll try to boot it by myself. It could be that some options conflicting with kasan.
>> Also boot cmdline might help.
>>
>
> Sure. It's the .config I use for fuzzing so it's rather big (attached).
>
> The cmdline is:
>
> [    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
>
> And the memory map:
>
> [    0.000000] e820: BIOS-provided physical RAM map:
> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
> [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
> [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
> [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
> [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable
>
>
> On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> It's not boot with the same Failed to allocate error?
>
> I think I misunderstood your question here. With >1GB is triggers a panic() when
> KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
> because 1GB isn't enough to load everything - so it fails in some other random
> spot as it runs on out memory.
>
>
> Thanks,
> Sasha

Looks like I found where the problem is. memblock_alloc cannot allocate
across NUMA nodes, therefore kasan fails for numa=fake>=8.
You should succeed with numa=fake=7 or less.


-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 15:55         ` Dave Hansen
  (?)
@ 2014-07-10 19:48           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:48 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-kbuild, linux-arm-kernel, x86, linux-mm

2014-07-10 19:55 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
>> On 07/10/14 00:26, Dave Hansen wrote:
>>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>> shadow address.
>>>>
>>>> Here is function to translate address to corresponding shadow address:
>>>>
>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>      {
>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>                              + kasan_shadow_start;
>>>>      }
>>>
>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>
>> It's used only for lowmem:
>>
>> static inline bool addr_is_in_mem(unsigned long addr)
>> {
>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> }
>
> That's fine, and definitely covers the common cases.  Could you make
> sure to call this out explicitly?  Also, there's nothing to _keep_ this
> approach working for things out of the direct map, right?  It would just
> be a matter of updating the shadow memory to have entries for the other
> virtual address ranges.

Why do you want shadow for things outside the direct map?
If you want to catch use-after-free in vmalloc then DEBUG_PAGEALLOC
will be enough.
If you want to catch out-of-bounds in vmalloc you don't need anything,
because vmalloc allocates a guard hole at the end.
Or do you want something else?
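
E.g. even without any instrumentation, something like this already faults
(illustrative fragment only):

	/*
	 * Illustration: the first byte past a vmalloc'ed area lands in the
	 * unmapped guard hole, so the access faults on its own.
	 */
	static void vmalloc_oob_demo(void)
	{
		char *p = vmalloc(PAGE_SIZE);

		if (!p)
			return;
		p[PAGE_SIZE] = 0;	/* hits the guard hole -> page fault */
		vfree(p);
	}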

>
> addr_is_in_mem() is a pretty bad name for what it's doing. :)
>
> I'd probably call it something like kasan_tracks_vaddr().
>
Agree




-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 19:48           ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:48 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-kbuild, linux-arm-kernel, x86, linux-mm

2014-07-10 19:55 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
>> On 07/10/14 00:26, Dave Hansen wrote:
>>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>> shadow address.
>>>>
>>>> Here is function to translate address to corresponding shadow address:
>>>>
>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>      {
>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>                              + kasan_shadow_start;
>>>>      }
>>>
>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>
>> It's used only for lowmem:
>>
>> static inline bool addr_is_in_mem(unsigned long addr)
>> {
>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> }
>
> That's fine, and definitely covers the common cases.  Could you make
> sure to call this out explicitly?  Also, there's nothing to _keep_ this
> approach working for things out of the direct map, right?  It would just
> be a matter of updating the shadow memory to have entries for the other
> virtual address ranges.

Why do you want shadow for things outside the direct map?
If you want to catch use-after-free in vmalloc then DEBUG_PAGEALLOC
will be enough.
If you want to catch out-of-bounds in vmalloc you don't need anything,
because vmalloc allocates a guard hole at the end.
Or do you want something else?

>
> addr_is_in_mem() is a pretty bad name for what it's doing. :)
>
> I'd probably call it something like kasan_tracks_vaddr().
>
Agree




-- 
Best regards,
Andrey Ryabinin


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 19:48           ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-10 19:48 UTC (permalink / raw)
  To: linux-arm-kernel

2014-07-10 19:55 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
>> On 07/10/14 00:26, Dave Hansen wrote:
>>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>> shadow address.
>>>>
>>>> Here is function to translate address to corresponding shadow address:
>>>>
>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>      {
>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>                              + kasan_shadow_start;
>>>>      }
>>>
>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>
>> It's used only for lowmem:
>>
>> static inline bool addr_is_in_mem(unsigned long addr)
>> {
>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> }
>
> That's fine, and definitely covers the common cases.  Could you make
> sure to call this out explicitly?  Also, there's nothing to _keep_ this
> approach working for things out of the direct map, right?  It would just
> be a matter of updating the shadow memory to have entries for the other
> virtual address ranges.

Why do you want shadow for things outside the direct map?
If you want to catch use-after-free in vmalloc then DEBUG_PAGEALLOC
will be enough.
If you want to catch out-of-bounds in vmalloc you don't need anything,
because vmalloc allocates a guard hole at the end.
Or do you want something else?

>
> addr_is_in_mem() is a pretty bad name for what it's doing. :)
>
> I'd probably call it something like kasan_tracks_vaddr().
>
Agree




-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
  2014-07-10 19:48           ` Andrey Ryabinin
  (?)
@ 2014-07-10 20:04             ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 20:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-kbuild, linux-arm-kernel, x86, linux-mm

On 07/10/2014 12:48 PM, Andrey Ryabinin wrote:
>>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>>
>>> It's used only for lowmem:
>>>
>>> static inline bool addr_is_in_mem(unsigned long addr)
>>> {
>>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>>> }
>>
>> That's fine, and definitely covers the common cases.  Could you make
>> sure to call this out explicitly?  Also, there's nothing to _keep_ this
>> approach working for things out of the direct map, right?  It would just
>> be a matter of updating the shadow memory to have entries for the other
>> virtual address ranges.
> 
> Why do you want shadow for things out of the direct map? If you want
> to catch use-after-free in vmalloc than DEBUG_PAGEALLOC will be
> enough. If you want catch out-of-bounds in vmalloc you don't need
> anything, because vmalloc allocates guarding hole in the end. Or do
> you want something else?

That's all true for page-size accesses.  Address sanitizer's biggest
advantage over using the page tables is that it can do checks at
sub-page granularity.  But, we don't have any APIs that I can think of
that _care_ about <PAGE_SIZE outside of the direct map (maybe zsmalloc,
but that's pretty obscure).

So I guess it doesn't matter.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 20:04             ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 20:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-kbuild, linux-arm-kernel, x86, linux-mm

On 07/10/2014 12:48 PM, Andrey Ryabinin wrote:
>>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>>
>>> It's used only for lowmem:
>>>
>>> static inline bool addr_is_in_mem(unsigned long addr)
>>> {
>>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>>> }
>>
>> That's fine, and definitely covers the common cases.  Could you make
>> sure to call this out explicitly?  Also, there's nothing to _keep_ this
>> approach working for things out of the direct map, right?  It would just
>> be a matter of updating the shadow memory to have entries for the other
>> virtual address ranges.
> 
> Why do you want shadow for things out of the direct map? If you want
> to catch use-after-free in vmalloc than DEBUG_PAGEALLOC will be
> enough. If you want catch out-of-bounds in vmalloc you don't need
> anything, because vmalloc allocates guarding hole in the end. Or do
> you want something else?

That's all true for page-size accesses.  Address sanitizer's biggest
advantage over using the page tables is that it can do checks at
sub-page granularity.  But, we don't have any APIs that I can think of
that _care_ about <PAGE_SIZE outside of the direct map (maybe zsmalloc,
but that's pretty obscure).

So I guess it doesn't matter.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure.
@ 2014-07-10 20:04             ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-07-10 20:04 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/10/2014 12:48 PM, Andrey Ryabinin wrote:
>>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>>
>>> It's used only for lowmem:
>>>
>>> static inline bool addr_is_in_mem(unsigned long addr)
>>> {
>>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>>> }
>>
>> That's fine, and definitely covers the common cases.  Could you make
>> sure to call this out explicitly?  Also, there's nothing to _keep_ this
>> approach working for things out of the direct map, right?  It would just
>> be a matter of updating the shadow memory to have entries for the other
>> virtual address ranges.
> 
> Why do you want shadow for things out of the direct map? If you want
> to catch use-after-free in vmalloc than DEBUG_PAGEALLOC will be
> enough. If you want catch out-of-bounds in vmalloc you don't need
> anything, because vmalloc allocates guarding hole in the end. Or do
> you want something else?

That's all true for page-size accesses.  Address sanitizer's biggest
advantage over using the page tables is that it can do checks at
sub-page granularity.  But, we don't have any APIs that I can think of
that _care_ about <PAGE_SIZE outside of the direct map (maybe zsmalloc,
but that's pretty obscure).

So I guess it doesn't matter.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-15  5:52     ` Joonsoo Kim
  -1 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:52 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as unaccessible.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/Makefile           |  2 ++
>  mm/kasan/kasan.c      | 18 ++++++++++++++++++
>  mm/kasan/kasan.h      |  1 +
>  mm/kasan/report.c     |  7 +++++++
>  mm/page_alloc.c       |  4 ++++
>  6 files changed, 38 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7efc3eb..4adc0a1 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
>  void kasan_alloc_shadow(void);
>  void kasan_init_shadow(void);
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
>  static inline void kasan_init_shadow(void) {}
>  static inline void kasan_alloc_shadow(void) {}
>  
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index dbe9a22..6a9c3f8 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,8 @@
>  # Makefile for the linux memory manager.
>  #
>  
> +KASAN_SANITIZE_page_alloc.o := n
> +
>  mmu-y			:= nommu.o
>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index e2cd345..109478e 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(page && !PageHighMem(page)))
> +		unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(!PageHighMem(page)))
> +		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
> +}
> +
>  void *kasan_memcpy(void *dst, const void *src, size_t len)
>  {
>  	if (unlikely(len == 0))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 711ae4f..be9597e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -5,6 +5,7 @@
>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 2430e05..6ef9e57 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
> +	case KASAN_FREE_PAGE:
> +		bug_type = "use after free";
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		bug_type = "wild memory access";
>  		break;
> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_FREE_PAGE:
> +		dump_page(page, "kasan error");
> +		dump_stack();
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		pr_err("No metainfo is available for this access.\n");
>  		dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8c9eeec..67833d1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -61,6 +61,7 @@
>  #include <linux/page-debug-flags.h>
>  #include <linux/hugetlb.h>
>  #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>  
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>  
>  	trace_mm_page_free(page, order);
>  	kmemcheck_free_shadow(page, order);
> +	kasan_free_pages(page, order);
>  
>  	if (PageAnon(page))
>  		page->mapping = NULL;
> @@ -2807,6 +2809,7 @@ out:
>  	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
>  		goto retry_cpuset;
>  
> +	kasan_alloc_pages(page, order);
>  	return page;
>  }
>  EXPORT_SYMBOL(__alloc_pages_nodemask);
> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  	if (end != outer_end)
>  		free_contig_range(end, outer_end - end);
>  
> +	kasan_alloc_pages(pfn_to_page(start), end - start);
>  done:
>  	undo_isolate_page_range(pfn_max_align_down(start),
>  				pfn_max_align_up(end), migratetype);

Hello,

I don't think that this is the right place for this hook.

There is a function, __isolate_free_page(), which steals buddy pages
from the page allocator. So you should put this hook into that function.

alloc_contig_range() uses that function through the call path below, so
adding the hook there solves your issue here:

alloc_contig_range() -> isolate_freepages_range() ->
isolate_freepages_block() -> split_free_page() -> __isolate_free_page()

And this also solves the marking issue in the compaction logic, since
compaction also steals buddy pages from the page allocator through
isolate_freepages() -> isolate_freepages_block() -> split_free_page()
-> __isolate_free_page().
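
Roughly like this (hand-written hunk just to show the placement; context
lines are approximate and untested):

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ __isolate_free_page()
 	/* Remove page from free list */
 	list_del(&page->lru);
 	zone->free_area[order].nr_free--;
 	rmv_page_order(page);
+	kasan_alloc_pages(page, order);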

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
@ 2014-07-15  5:52     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:52 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as unaccessible.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/Makefile           |  2 ++
>  mm/kasan/kasan.c      | 18 ++++++++++++++++++
>  mm/kasan/kasan.h      |  1 +
>  mm/kasan/report.c     |  7 +++++++
>  mm/page_alloc.c       |  4 ++++
>  6 files changed, 38 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7efc3eb..4adc0a1 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
>  void kasan_alloc_shadow(void);
>  void kasan_init_shadow(void);
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
>  static inline void kasan_init_shadow(void) {}
>  static inline void kasan_alloc_shadow(void) {}
>  
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index dbe9a22..6a9c3f8 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,8 @@
>  # Makefile for the linux memory manager.
>  #
>  
> +KASAN_SANITIZE_page_alloc.o := n
> +
>  mmu-y			:= nommu.o
>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index e2cd345..109478e 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(page && !PageHighMem(page)))
> +		unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(!PageHighMem(page)))
> +		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
> +}
> +
>  void *kasan_memcpy(void *dst, const void *src, size_t len)
>  {
>  	if (unlikely(len == 0))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 711ae4f..be9597e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -5,6 +5,7 @@
>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 2430e05..6ef9e57 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
> +	case KASAN_FREE_PAGE:
> +		bug_type = "use after free";
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		bug_type = "wild memory access";
>  		break;
> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_FREE_PAGE:
> +		dump_page(page, "kasan error");
> +		dump_stack();
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		pr_err("No metainfo is available for this access.\n");
>  		dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8c9eeec..67833d1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -61,6 +61,7 @@
>  #include <linux/page-debug-flags.h>
>  #include <linux/hugetlb.h>
>  #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>  
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>  
>  	trace_mm_page_free(page, order);
>  	kmemcheck_free_shadow(page, order);
> +	kasan_free_pages(page, order);
>  
>  	if (PageAnon(page))
>  		page->mapping = NULL;
> @@ -2807,6 +2809,7 @@ out:
>  	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
>  		goto retry_cpuset;
>  
> +	kasan_alloc_pages(page, order);
>  	return page;
>  }
>  EXPORT_SYMBOL(__alloc_pages_nodemask);
> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  	if (end != outer_end)
>  		free_contig_range(end, outer_end - end);
>  
> +	kasan_alloc_pages(pfn_to_page(start), end - start);
>  done:
>  	undo_isolate_page_range(pfn_max_align_down(start),
>  				pfn_max_align_up(end), migratetype);

Hello,

I don't think that this is the right place for this hook.

There is a function, __isolate_free_page(), which steals buddy pages
from the page allocator. So you should put this hook into that function.

alloc_contig_range() uses that function through the call path below, so
adding the hook there solves your issue here:

alloc_contig_range() -> isolate_freepages_range() ->
isolate_freepages_block() -> split_free_page() -> __isolate_free_page()

And this also solves the marking issue in the compaction logic, since
compaction also steals buddy pages from the page allocator through
isolate_freepages() -> isolate_freepages_block() -> split_free_page()
-> __isolate_free_page().

Thanks.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
@ 2014-07-15  5:52     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:52 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as unaccessible.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/Makefile           |  2 ++
>  mm/kasan/kasan.c      | 18 ++++++++++++++++++
>  mm/kasan/kasan.h      |  1 +
>  mm/kasan/report.c     |  7 +++++++
>  mm/page_alloc.c       |  4 ++++
>  6 files changed, 38 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 7efc3eb..4adc0a1 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
>  void kasan_alloc_shadow(void);
>  void kasan_init_shadow(void);
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
>  static inline void kasan_init_shadow(void) {}
>  static inline void kasan_alloc_shadow(void) {}
>  
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index dbe9a22..6a9c3f8 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,8 @@
>  # Makefile for the linux memory manager.
>  #
>  
> +KASAN_SANITIZE_page_alloc.o := n
> +
>  mmu-y			:= nommu.o
>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index e2cd345..109478e 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(page && !PageHighMem(page)))
> +		unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (likely(!PageHighMem(page)))
> +		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
> +}
> +
>  void *kasan_memcpy(void *dst, const void *src, size_t len)
>  {
>  	if (unlikely(len == 0))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 711ae4f..be9597e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -5,6 +5,7 @@
>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 2430e05..6ef9e57 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
> +	case KASAN_FREE_PAGE:
> +		bug_type = "use after free";
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		bug_type = "wild memory access";
>  		break;
> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_FREE_PAGE:
> +		dump_page(page, "kasan error");
> +		dump_stack();
> +		break;
>  	case KASAN_SHADOW_GAP:
>  		pr_err("No metainfo is available for this access.\n");
>  		dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8c9eeec..67833d1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -61,6 +61,7 @@
>  #include <linux/page-debug-flags.h>
>  #include <linux/hugetlb.h>
>  #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>  
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>  
>  	trace_mm_page_free(page, order);
>  	kmemcheck_free_shadow(page, order);
> +	kasan_free_pages(page, order);
>  
>  	if (PageAnon(page))
>  		page->mapping = NULL;
> @@ -2807,6 +2809,7 @@ out:
>  	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
>  		goto retry_cpuset;
>  
> +	kasan_alloc_pages(page, order);
>  	return page;
>  }
>  EXPORT_SYMBOL(__alloc_pages_nodemask);
> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  	if (end != outer_end)
>  		free_contig_range(end, outer_end - end);
>  
> +	kasan_alloc_pages(pfn_to_page(start), end - start);
>  done:
>  	undo_isolate_page_range(pfn_max_align_down(start),
>  				pfn_max_align_up(end), migratetype);

Hello,

I don't think that this is the right place for this hook.

There is a function, __isolate_free_page(), which steals buddy pages
from the page allocator. So you should put this hook in that function.

alloc_contig_range() uses that function through the call path below, so
adding the hook there solves your issue here:

alloc_contig_range() -> isolate_freepages_range() ->
isolate_freepages_block() -> split_free_page() -> __isolate_free_page()

And this also solves the marking issue in the compaction logic, since
compaction also steals buddy pages from the page allocator through
isolate_freepages() -> isolate_freepages_block() -> split_free_page()
-> __isolate_free_page().
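
Purely as an illustration (not the actual patch, and assuming
__isolate_free_page() keeps its usual int __isolate_free_page(struct page
*page, unsigned int order) signature), the idea is to mark the page at the
single point where it leaves the buddy lists, e.g. with a hypothetical
wrapper like this:

/* hypothetical helper, for illustration only */
static int isolate_free_page_unpoisoned(struct page *page, unsigned int order)
{
	/* take the page off the buddy free lists */
	int ret = __isolate_free_page(page, order);

	/* hook from patch 08/21: mark the stolen page accessible for kasan */
	if (ret)
		kasan_alloc_pages(page, order);

	return ret;
}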

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-15  5:53     ` Joonsoo Kim
  -1 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:53 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
> This patch shares virt_to_cache() between slab and slub, and
> it is used in cache_from_obj() now.
> Later virt_to_cache() will be used by the kernel address sanitizer as well.

I think that this patch won't be needed.
See comment in 15/21.

Thanks.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
@ 2014-07-15  5:53     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:53 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
> This patch shares virt_to_cache() between slab and slub, and
> it is used in cache_from_obj() now.
> Later virt_to_cache() will be used by the kernel address sanitizer as well.

I think that this patch won't be needed.
See comment in 15/21.

Thanks.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
@ 2014-07-15  5:53     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  5:53 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
> This patch shares virt_to_cache() between slab and slub, and
> it is used in cache_from_obj() now.
> Later virt_to_cache() will be used by the kernel address sanitizer as well.

I think that this patch won't be needed.
See comment in 15/21.

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-15  6:04     ` Joonsoo Kim
  -1 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> Some code in slub could validly touch memory marked by kasan as unaccessible.
> Even though slub.c is not instrumented, functions called from it are instrumented,
> so to avoid false positive reports such places are protected by
> kasan_disable_local()/kasan_enable_local() calls.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slub.c | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 6ddedf9..c8dbea7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>  	if (!(s->flags & SLAB_STORE_USER))
>  		return;
>  
> +	kasan_disable_local();
>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>  	print_track("Freed", get_track(s, object, TRACK_FREE));
> +	kasan_enable_local();

I don't think that this is needed, since print_track() doesn't call an
external function with the object pointer. print_track() calls pr_err(), but,
before calling it, it retrieves t->addrs[i], so the memory access only occurs
in slub.c.

>  }
>  
>  static void print_page_info(struct page *page)
> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	unsigned int off;	/* Offset of last byte */
>  	u8 *addr = page_address(page);
>  
> +	kasan_disable_local();
> +
>  	print_tracking(s, p);
>  
>  	print_page_info(page);
> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  		/* Beginning of the filler is the free pointer */
>  		print_section("Padding ", p + off, s->size - off);
>  
> +	kasan_enable_local();
> +
>  	dump_stack();
>  }

And I recommend that you put this hook in the right place.
At a glance, the problematic function is print_section(), which makes an
external function call, print_hex_dump(), with the object pointer.
If you disable kasan in print_section(), none of the changes below will be
needed, I guess.
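
For what it's worth, a minimal sketch of that suggestion (assuming
print_section() in slub.c is still just a thin wrapper around
print_hex_dump(); only the kasan_*_local() calls are new):

static void print_section(char *text, u8 *addr, unsigned int length)
{
	/* the hex dump reads the object/redzone bytes directly */
	kasan_disable_local();
	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
			length, 1);
	kasan_enable_local();
}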

Thanks.

>  
> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  					struct page *page,
>  					void *object, unsigned long addr)
>  {
> +
> +	kasan_disable_local();
>  	if (!check_slab(s, page))
>  		goto bad;
>  
> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  		set_track(s, object, TRACK_ALLOC, addr);
>  	trace(s, page, object, 1);
>  	init_object(s, object, SLUB_RED_ACTIVE);
> +	kasan_enable_local();
>  	return 1;
>  
>  bad:
> @@ -1041,6 +1050,7 @@ bad:
>  		page->inuse = page->objects;
>  		page->freelist = NULL;
>  	}
> +	kasan_enable_local();
>  	return 0;
>  }
>  
> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  
>  	spin_lock_irqsave(&n->list_lock, *flags);
>  	slab_lock(page);
> +	kasan_disable_local();
>  
>  	if (!check_slab(s, page))
>  		goto fail;
> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  	trace(s, page, object, 0);
>  	init_object(s, object, SLUB_RED_INACTIVE);
>  out:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	/*
>  	 * Keep node_lock to preserve integrity
> @@ -1096,6 +1108,7 @@ out:
>  	return n;
>  
>  fail:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	spin_unlock_irqrestore(&n->list_lock, *flags);
>  	slab_fix(s, "Object at 0x%p not freed", object);
> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>  				void *object)
>  {
>  	setup_object_debug(s, page, object);
> -	if (unlikely(s->ctor))
> +	if (unlikely(s->ctor)) {
> +		kasan_disable_local();
>  		s->ctor(object);
> +		kasan_enable_local();
> +	}
>  }
>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> -
> +		kasan_disable_local();
>  		slab_pad_check(s, page);
>  		for_each_object(p, s, page_address(page),
>  						page->objects)
>  			check_object(s, page, p, SLUB_RED_INACTIVE);
> +		kasan_enable_local();
>  	}
>  
>  	kmemcheck_free_shadow(page, compound_order(page));
> -- 
> 1.8.5.5
> 

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
@ 2014-07-15  6:04     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> Some code in slub could validly touch memory marked by kasan as unaccessible.
> Even though slub.c is not instrumented, functions called from it are instrumented,
> so to avoid false positive reports such places are protected by
> kasan_disable_local()/kasan_enable_local() calls.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slub.c | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 6ddedf9..c8dbea7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>  	if (!(s->flags & SLAB_STORE_USER))
>  		return;
>  
> +	kasan_disable_local();
>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>  	print_track("Freed", get_track(s, object, TRACK_FREE));
> +	kasan_enable_local();

I don't think that this is needed, since print_track() doesn't call an
external function with the object pointer. print_track() calls pr_err(), but,
before calling it, it retrieves t->addrs[i], so the memory access only occurs
in slub.c.

>  }
>  
>  static void print_page_info(struct page *page)
> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	unsigned int off;	/* Offset of last byte */
>  	u8 *addr = page_address(page);
>  
> +	kasan_disable_local();
> +
>  	print_tracking(s, p);
>  
>  	print_page_info(page);
> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  		/* Beginning of the filler is the free pointer */
>  		print_section("Padding ", p + off, s->size - off);
>  
> +	kasan_enable_local();
> +
>  	dump_stack();
>  }

And I recommend that you put this hook in the right place.
At a glance, the problematic function is print_section(), which makes an
external function call, print_hex_dump(), with the object pointer.
If you disable kasan in print_section(), none of the changes below will be
needed, I guess.

Thanks.

>  
> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  					struct page *page,
>  					void *object, unsigned long addr)
>  {
> +
> +	kasan_disable_local();
>  	if (!check_slab(s, page))
>  		goto bad;
>  
> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  		set_track(s, object, TRACK_ALLOC, addr);
>  	trace(s, page, object, 1);
>  	init_object(s, object, SLUB_RED_ACTIVE);
> +	kasan_enable_local();
>  	return 1;
>  
>  bad:
> @@ -1041,6 +1050,7 @@ bad:
>  		page->inuse = page->objects;
>  		page->freelist = NULL;
>  	}
> +	kasan_enable_local();
>  	return 0;
>  }
>  
> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  
>  	spin_lock_irqsave(&n->list_lock, *flags);
>  	slab_lock(page);
> +	kasan_disable_local();
>  
>  	if (!check_slab(s, page))
>  		goto fail;
> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  	trace(s, page, object, 0);
>  	init_object(s, object, SLUB_RED_INACTIVE);
>  out:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	/*
>  	 * Keep node_lock to preserve integrity
> @@ -1096,6 +1108,7 @@ out:
>  	return n;
>  
>  fail:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	spin_unlock_irqrestore(&n->list_lock, *flags);
>  	slab_fix(s, "Object at 0x%p not freed", object);
> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>  				void *object)
>  {
>  	setup_object_debug(s, page, object);
> -	if (unlikely(s->ctor))
> +	if (unlikely(s->ctor)) {
> +		kasan_disable_local();
>  		s->ctor(object);
> +		kasan_enable_local();
> +	}
>  }
>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> -
> +		kasan_disable_local();
>  		slab_pad_check(s, page);
>  		for_each_object(p, s, page_address(page),
>  						page->objects)
>  			check_object(s, page, p, SLUB_RED_INACTIVE);
> +		kasan_enable_local();
>  	}
>  
>  	kmemcheck_free_shadow(page, compound_order(page));
> -- 
> 1.8.5.5
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
@ 2014-07-15  6:04     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> Some code in slub could validly touch memory marked by kasan as unaccessible.
> Even though slub.c is not instrumented, functions called from it are instrumented,
> so to avoid false positive reports such places are protected by
> kasan_disable_local()/kasan_enable_local() calls.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slub.c | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 6ddedf9..c8dbea7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>  	if (!(s->flags & SLAB_STORE_USER))
>  		return;
>  
> +	kasan_disable_local();
>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>  	print_track("Freed", get_track(s, object, TRACK_FREE));
> +	kasan_enable_local();

I don't think that this is needed, since print_track() doesn't call an
external function with the object pointer. print_track() calls pr_err(), but,
before calling it, it retrieves t->addrs[i], so the memory access only occurs
in slub.c.

>  }
>  
>  static void print_page_info(struct page *page)
> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	unsigned int off;	/* Offset of last byte */
>  	u8 *addr = page_address(page);
>  
> +	kasan_disable_local();
> +
>  	print_tracking(s, p);
>  
>  	print_page_info(page);
> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  		/* Beginning of the filler is the free pointer */
>  		print_section("Padding ", p + off, s->size - off);
>  
> +	kasan_enable_local();
> +
>  	dump_stack();
>  }

And I recommend that you put this hook in the right place.
At a glance, the problematic function is print_section(), which makes an
external function call, print_hex_dump(), with the object pointer.
If you disable kasan in print_section(), none of the changes below will be
needed, I guess.

Thanks.

>  
> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  					struct page *page,
>  					void *object, unsigned long addr)
>  {
> +
> +	kasan_disable_local();
>  	if (!check_slab(s, page))
>  		goto bad;
>  
> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>  		set_track(s, object, TRACK_ALLOC, addr);
>  	trace(s, page, object, 1);
>  	init_object(s, object, SLUB_RED_ACTIVE);
> +	kasan_enable_local();
>  	return 1;
>  
>  bad:
> @@ -1041,6 +1050,7 @@ bad:
>  		page->inuse = page->objects;
>  		page->freelist = NULL;
>  	}
> +	kasan_enable_local();
>  	return 0;
>  }
>  
> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  
>  	spin_lock_irqsave(&n->list_lock, *flags);
>  	slab_lock(page);
> +	kasan_disable_local();
>  
>  	if (!check_slab(s, page))
>  		goto fail;
> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>  	trace(s, page, object, 0);
>  	init_object(s, object, SLUB_RED_INACTIVE);
>  out:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	/*
>  	 * Keep node_lock to preserve integrity
> @@ -1096,6 +1108,7 @@ out:
>  	return n;
>  
>  fail:
> +	kasan_enable_local();
>  	slab_unlock(page);
>  	spin_unlock_irqrestore(&n->list_lock, *flags);
>  	slab_fix(s, "Object at 0x%p not freed", object);
> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>  				void *object)
>  {
>  	setup_object_debug(s, page, object);
> -	if (unlikely(s->ctor))
> +	if (unlikely(s->ctor)) {
> +		kasan_disable_local();
>  		s->ctor(object);
> +		kasan_enable_local();
> +	}
>  }
>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> -
> +		kasan_disable_local();
>  		slab_pad_check(s, page);
>  		for_each_object(p, s, page_address(page),
>  						page->objects)
>  			check_object(s, page, p, SLUB_RED_INACTIVE);
> +		kasan_enable_local();
>  	}
>  
>  	kmemcheck_free_shadow(page, compound_order(page));
> -- 
> 1.8.5.5
> 

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-07-15  6:12     ` Joonsoo Kim
  (?)
@ 2014-07-15  6:08       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-07-15  6:08 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Tue, Jul 15, 2014 at 10:12 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison the rounded up allocation size for dname
>> to avoid kasan's reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyond the allocated size.
>
> If it reads a little beyond the allocated size, IMHO, it is better to
> allocate the correct size.
>
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>
> Isn't it?


I absolutely agree!
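
Something along these lines in __d_alloc(), I suppose (a sketch of the
suggested change only, reusing the error path from the hunk above; not a
tested patch):

	/* round up so __d_lookup_rcu() can validly read a little past the string */
	dname = kmalloc(roundup(name->len + 1, sizeof(unsigned long)),
			GFP_KERNEL);
	if (!dname) {
		kmem_cache_free(dentry_cache, dentry);
		return NULL;
	}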


> Thanks.
>
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  fs/dcache.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>>  #include <linux/prefetch.h>
>>  #include <linux/ratelimit.h>
>>  #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>>  #include "internal.h"
>>  #include "mount.h"
>>
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>>                       kmem_cache_free(dentry_cache, dentry);
>>                       return NULL;
>>               }
>> +             unpoison_shadow(dname,
>> +                             roundup(name->len + 1, sizeof(unsigned long)));
>>       } else  {
>>               dname = dentry->d_iname;
>>       }
>> --
>> 1.8.5.5
>>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-07-15  6:08       ` Dmitry Vyukov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-07-15  6:08 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Tue, Jul 15, 2014 at 10:12 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison the rounded up allocation size for dname
>> to avoid kasan's reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyond the allocated size.
>
> If it reads a little beyond the allocated size, IMHO, it is better to
> allocate the correct size.
>
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>
> Isn't it?


I absolutely agree!


> Thanks.
>
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  fs/dcache.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>>  #include <linux/prefetch.h>
>>  #include <linux/ratelimit.h>
>>  #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>>  #include "internal.h"
>>  #include "mount.h"
>>
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>>                       kmem_cache_free(dentry_cache, dentry);
>>                       return NULL;
>>               }
>> +             unpoison_shadow(dname,
>> +                             roundup(name->len + 1, sizeof(unsigned long)));
>>       } else  {
>>               dname = dentry->d_iname;
>>       }
>> --
>> 1.8.5.5
>>


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-07-15  6:08       ` Dmitry Vyukov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-07-15  6:08 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 15, 2014 at 10:12 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison the rounded up allocation size for dname
>> to avoid kasan's reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyond the allocated size.
>
> If it reads a little beyond the allocated size, IMHO, it is better to
> allocate the correct size.
>
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>
> Isn't it?


I absolutely agree!


> Thanks.
>
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  fs/dcache.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>>  #include <linux/prefetch.h>
>>  #include <linux/ratelimit.h>
>>  #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>>  #include "internal.h"
>>  #include "mount.h"
>>
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>>                       kmem_cache_free(dentry_cache, dentry);
>>                       return NULL;
>>               }
>> +             unpoison_shadow(dname,
>> +                             roundup(name->len + 1, sizeof(unsigned long)));
>>       } else  {
>>               dname = dentry->d_iname;
>>       }
>> --
>> 1.8.5.5
>>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-15  6:09     ` Joonsoo Kim
  -1 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:09 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> When a slab page is allocated, the whole page is marked as unaccessible
> in the corresponding shadow memory.
> On allocation of a slub object, the requested allocation size is marked as
> accessible, and the rest of the object (including slub's metadata) is
> marked as a redzone (unaccessible).
> 
> We also mark an object as accessible if ksize was called for it.
> There are some places in the kernel where the ksize function is called to
> inquire the size of the really allocated area. Such callers could validly
> access the whole allocated memory, so it should be marked as accessible by
> a kasan_krealloc call.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  22 ++++++++++
>  include/linux/slab.h  |  19 +++++++--
>  lib/Kconfig.kasan     |   2 +
>  mm/kasan/kasan.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |   5 +++
>  mm/kasan/report.c     |  23 +++++++++++
>  mm/slab.h             |   2 +-
>  mm/slab_common.c      |   9 +++--
>  mm/slub.c             |  24 ++++++++++-
>  9 files changed, 208 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4adc0a1..583c011 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
>  void kasan_alloc_pages(struct page *page, unsigned int order);
>  void kasan_free_pages(struct page *page, unsigned int order);
>  
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
> +
> +void kasan_alloc_slab_pages(struct page *page, int order);
> +void kasan_free_slab_pages(struct page *page, int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>  
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
> +
> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 68b1feab..a9513e9 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
>  				(unsigned long)ZERO_SIZE_PTR)
>  
>  #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>  
>  struct mem_cgroup;
>  /*
> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
>   */
>  static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  {
> +	void *ret;
> +
>  	if (__builtin_constant_p(size)) {
>  		if (size > KMALLOC_MAX_CACHE_SIZE)
>  			return kmalloc_large(size, flags);
> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  			if (!index)
>  				return ZERO_SIZE_PTR;
>  
> -			return kmem_cache_alloc_trace(kmalloc_caches[index],
> +			ret = kmem_cache_alloc_trace(kmalloc_caches[index],
>  					flags, size);
> +
> +			kasan_kmalloc(kmalloc_caches[index], ret, size);
> +
> +			return ret;
>  		}
>  #endif
>  	}
> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
>  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  {
>  #ifndef CONFIG_SLOB
> +	void *ret;
> +
>  	if (__builtin_constant_p(size) &&
>  		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
>  		int i = kmalloc_index(size);
> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  		if (!i)
>  			return ZERO_SIZE_PTR;
>  
> -		return kmem_cache_alloc_node_trace(kmalloc_caches[i],
> -						flags, node, size);
> +		ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
> +						  flags, node, size);
> +
> +		kasan_kmalloc(kmalloc_caches[i], ret, size);
> +
> +		return ret;
>  	}
>  #endif
>  	return __kmalloc_node(size, flags, node);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 2bfff78..289a624 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>  
>  config KASAN
>  	bool "AddressSanitizer: dynamic memory error detector"
> +	depends on SLUB
> +	select STACKTRACE
>  	default n
>  	help
>  	  Enables AddressSanitizer - dynamic memory error detector,
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 109478e..9b5182a 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
> +}
> +
> +void kasan_free_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
> +	unpoison_shadow(object, cache->alloc_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> +	unsigned long size = cache->size;
> +	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	redzone_start = round_up((unsigned long)(object + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)object + cache->size;
> +
> +	unpoison_shadow(object, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> +	struct page *page;
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(ptr == NULL))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	redzone_start = round_up((unsigned long)(ptr + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> +	unpoison_shadow(ptr, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_PAGE_REDZONE);
> +}
> +EXPORT_SYMBOL(kasan_kmalloc_large);
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> +	struct page *page;
> +
> +	if (unlikely(object == ZERO_SIZE_PTR))
> +		return;
> +
> +	page = virt_to_head_page(object);
> +
> +	if (unlikely(!PageSlab(page)))
> +		kasan_kmalloc_large(object, size);
> +	else
> +		kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> +	struct page *page;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
> +}
> +
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
>  	if (unlikely(!kasan_initialized))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index be9597e..f925d03 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,11 @@
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 6ef9e57..6d829af 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
>  	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_PAGE_REDZONE:
> +	case KASAN_SLAB_REDZONE:
> +	case KASAN_KMALLOC_REDZONE:
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
>  	case KASAN_FREE_PAGE:
> +	case KASAN_SLAB_FREE:
> +	case KASAN_KMALLOC_FREE:
>  		bug_type = "use after free";
>  		break;
>  	case KASAN_SHADOW_GAP:
> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_SLAB_REDZONE:
> +		cache = virt_to_cache((void *)info->access_addr);
> +		slab_err(cache, page, "access to slab redzone");

We need the head page of the invalid access address for slab_err(), since the
head page has all the metadata of this slab. So, instead of virt_to_cache(),
use virt_to_head_page() and page->slab_cache.

> +		dump_stack();
> +		break;
> +	case KASAN_KMALLOC_FREE:
> +	case KASAN_KMALLOC_REDZONE:
> +	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +		if (PageSlab(page)) {
> +			cache = virt_to_cache((void *)info->access_addr);
> +			slab_start = page_address(virt_to_head_page((void *)info->access_addr));
> +			object = virt_to_obj(cache, slab_start,
> +					(void *)info->access_addr);
> +			object_err(cache, page, object, "kasan error");
> +			break;
> +		}

Same here, the page should be the head page.
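
Roughly, for the first hunk (and similarly for the PageSlab() case), a
sketch using the same variable names as the patch; untested:

	case KASAN_SLAB_REDZONE:
		/* use the head page so tail pages of a compound slab resolve
		 * to the right kmem_cache and slab metadata */
		page = virt_to_head_page((void *)info->access_addr);
		cache = page->slab_cache;
		slab_err(cache, page, "access to slab redzone");
		dump_stack();
		break;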

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
@ 2014-07-15  6:09     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:09 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> When a slab page is allocated, the whole page is marked as unaccessible
> in the corresponding shadow memory.
> On allocation of a slub object, the requested allocation size is marked as
> accessible, and the rest of the object (including slub's metadata) is
> marked as a redzone (unaccessible).
> 
> We also mark an object as accessible if ksize was called for it.
> There are some places in the kernel where the ksize function is called to
> inquire the size of the really allocated area. Such callers could validly
> access the whole allocated memory, so it should be marked as accessible by
> a kasan_krealloc call.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  22 ++++++++++
>  include/linux/slab.h  |  19 +++++++--
>  lib/Kconfig.kasan     |   2 +
>  mm/kasan/kasan.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |   5 +++
>  mm/kasan/report.c     |  23 +++++++++++
>  mm/slab.h             |   2 +-
>  mm/slab_common.c      |   9 +++--
>  mm/slub.c             |  24 ++++++++++-
>  9 files changed, 208 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4adc0a1..583c011 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
>  void kasan_alloc_pages(struct page *page, unsigned int order);
>  void kasan_free_pages(struct page *page, unsigned int order);
>  
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
> +
> +void kasan_alloc_slab_pages(struct page *page, int order);
> +void kasan_free_slab_pages(struct page *page, int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>  
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
> +
> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 68b1feab..a9513e9 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
>  				(unsigned long)ZERO_SIZE_PTR)
>  
>  #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>  
>  struct mem_cgroup;
>  /*
> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
>   */
>  static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  {
> +	void *ret;
> +
>  	if (__builtin_constant_p(size)) {
>  		if (size > KMALLOC_MAX_CACHE_SIZE)
>  			return kmalloc_large(size, flags);
> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  			if (!index)
>  				return ZERO_SIZE_PTR;
>  
> -			return kmem_cache_alloc_trace(kmalloc_caches[index],
> +			ret = kmem_cache_alloc_trace(kmalloc_caches[index],
>  					flags, size);
> +
> +			kasan_kmalloc(kmalloc_caches[index], ret, size);
> +
> +			return ret;
>  		}
>  #endif
>  	}
> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
>  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  {
>  #ifndef CONFIG_SLOB
> +	void *ret;
> +
>  	if (__builtin_constant_p(size) &&
>  		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
>  		int i = kmalloc_index(size);
> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  		if (!i)
>  			return ZERO_SIZE_PTR;
>  
> -		return kmem_cache_alloc_node_trace(kmalloc_caches[i],
> -						flags, node, size);
> +		ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
> +						  flags, node, size);
> +
> +		kasan_kmalloc(kmalloc_caches[i], ret, size);
> +
> +		return ret;
>  	}
>  #endif
>  	return __kmalloc_node(size, flags, node);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 2bfff78..289a624 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>  
>  config KASAN
>  	bool "AddressSanitizer: dynamic memory error detector"
> +	depends on SLUB
> +	select STACKTRACE
>  	default n
>  	help
>  	  Enables AddressSanitizer - dynamic memory error detector,
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 109478e..9b5182a 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
> +}
> +
> +void kasan_free_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
> +	unpoison_shadow(object, cache->alloc_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> +	unsigned long size = cache->size;
> +	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	redzone_start = round_up((unsigned long)(object + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)object + cache->size;
> +
> +	unpoison_shadow(object, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> +	struct page *page;
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(ptr == NULL))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	redzone_start = round_up((unsigned long)(ptr + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> +	unpoison_shadow(ptr, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_PAGE_REDZONE);
> +}
> +EXPORT_SYMBOL(kasan_kmalloc_large);
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> +	struct page *page;
> +
> +	if (unlikely(object == ZERO_SIZE_PTR))
> +		return;
> +
> +	page = virt_to_head_page(object);
> +
> +	if (unlikely(!PageSlab(page)))
> +		kasan_kmalloc_large(object, size);
> +	else
> +		kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> +	struct page *page;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
> +}
> +
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
>  	if (unlikely(!kasan_initialized))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index be9597e..f925d03 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,11 @@
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 6ef9e57..6d829af 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
>  	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_PAGE_REDZONE:
> +	case KASAN_SLAB_REDZONE:
> +	case KASAN_KMALLOC_REDZONE:
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
>  	case KASAN_FREE_PAGE:
> +	case KASAN_SLAB_FREE:
> +	case KASAN_KMALLOC_FREE:
>  		bug_type = "use after free";
>  		break;
>  	case KASAN_SHADOW_GAP:
> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_SLAB_REDZONE:
> +		cache = virt_to_cache((void *)info->access_addr);
> +		slab_err(cache, page, "access to slab redzone");

We need the head page of the invalid access address for slab_err(), since the
head page has all the metadata of this slab. So, instead of virt_to_cache(),
use virt_to_head_page() and page->slab_cache.

> +		dump_stack();
> +		break;
> +	case KASAN_KMALLOC_FREE:
> +	case KASAN_KMALLOC_REDZONE:
> +	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +		if (PageSlab(page)) {
> +			cache = virt_to_cache((void *)info->access_addr);
> +			slab_start = page_address(virt_to_head_page((void *)info->access_addr));
> +			object = virt_to_obj(cache, slab_start,
> +					(void *)info->access_addr);
> +			object_err(cache, page, object, "kasan error");
> +			break;
> +		}

Same here, the page should be the head page.

Thanks.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
@ 2014-07-15  6:09     ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> When a slab page is allocated, the whole page is marked as unaccessible
> in the corresponding shadow memory.
> On allocation of a slub object, the requested allocation size is marked as
> accessible, and the rest of the object (including slub's metadata) is
> marked as a redzone (unaccessible).
> 
> We also mark an object as accessible if ksize was called for it.
> There are some places in the kernel where the ksize function is called to
> inquire the size of the really allocated area. Such callers could validly
> access the whole allocated memory, so it should be marked as accessible by
> a kasan_krealloc call.
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  22 ++++++++++
>  include/linux/slab.h  |  19 +++++++--
>  lib/Kconfig.kasan     |   2 +
>  mm/kasan/kasan.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |   5 +++
>  mm/kasan/report.c     |  23 +++++++++++
>  mm/slab.h             |   2 +-
>  mm/slab_common.c      |   9 +++--
>  mm/slub.c             |  24 ++++++++++-
>  9 files changed, 208 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4adc0a1..583c011 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
>  void kasan_alloc_pages(struct page *page, unsigned int order);
>  void kasan_free_pages(struct page *page, unsigned int order);
>  
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
> +
> +void kasan_alloc_slab_pages(struct page *page, int order);
> +void kasan_free_slab_pages(struct page *page, int order);
> +
>  #else /* CONFIG_KASAN */
>  
>  static inline void unpoison_shadow(const void *address, size_t size) {}
> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>  
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
> +
> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
> +
>  #endif /* CONFIG_KASAN */
>  
>  #endif /* LINUX_KASAN_H */
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 68b1feab..a9513e9 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
>  				(unsigned long)ZERO_SIZE_PTR)
>  
>  #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>  
>  struct mem_cgroup;
>  /*
> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
>   */
>  static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  {
> +	void *ret;
> +
>  	if (__builtin_constant_p(size)) {
>  		if (size > KMALLOC_MAX_CACHE_SIZE)
>  			return kmalloc_large(size, flags);
> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  			if (!index)
>  				return ZERO_SIZE_PTR;
>  
> -			return kmem_cache_alloc_trace(kmalloc_caches[index],
> +			ret = kmem_cache_alloc_trace(kmalloc_caches[index],
>  					flags, size);
> +
> +			kasan_kmalloc(kmalloc_caches[index], ret, size);
> +
> +			return ret;
>  		}
>  #endif
>  	}
> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
>  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  {
>  #ifndef CONFIG_SLOB
> +	void *ret;
> +
>  	if (__builtin_constant_p(size) &&
>  		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
>  		int i = kmalloc_index(size);
> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>  		if (!i)
>  			return ZERO_SIZE_PTR;
>  
> -		return kmem_cache_alloc_node_trace(kmalloc_caches[i],
> -						flags, node, size);
> +		ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
> +						  flags, node, size);
> +
> +		kasan_kmalloc(kmalloc_caches[i], ret, size);
> +
> +		return ret;
>  	}
>  #endif
>  	return __kmalloc_node(size, flags, node);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 2bfff78..289a624 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>  
>  config KASAN
>  	bool "AddressSanitizer: dynamic memory error detector"
> +	depends on SLUB
> +	select STACKTRACE
>  	default n
>  	help
>  	  Enables AddressSanitizer - dynamic memory error detector,
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 109478e..9b5182a 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
>  	}
>  }
>  
> +void kasan_alloc_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
> +}
> +
> +void kasan_free_slab_pages(struct page *page, int order)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
> +	unpoison_shadow(object, cache->alloc_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> +	unsigned long size = cache->size;
> +	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(object == NULL))
> +		return;
> +
> +	redzone_start = round_up((unsigned long)(object + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)object + cache->size;
> +
> +	unpoison_shadow(object, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> +	struct page *page;
> +	unsigned long redzone_start;
> +	unsigned long redzone_end;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	if (unlikely(ptr == NULL))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	redzone_start = round_up((unsigned long)(ptr + size),
> +				KASAN_SHADOW_SCALE_SIZE);
> +	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> +	unpoison_shadow(ptr, size);
> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +		KASAN_PAGE_REDZONE);
> +}
> +EXPORT_SYMBOL(kasan_kmalloc_large);
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> +	struct page *page;
> +
> +	if (unlikely(object == ZERO_SIZE_PTR))
> +		return;
> +
> +	page = virt_to_head_page(object);
> +
> +	if (unlikely(!PageSlab(page)))
> +		kasan_kmalloc_large(object, size);
> +	else
> +		kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> +	struct page *page;
> +
> +	if (unlikely(!kasan_initialized))
> +		return;
> +
> +	page = virt_to_page(ptr);
> +	poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
> +}
> +
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
>  	if (unlikely(!kasan_initialized))
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index be9597e..f925d03 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,11 @@
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>  
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>  
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 6ef9e57..6d829af 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
>  	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_PAGE_REDZONE:
> +	case KASAN_SLAB_REDZONE:
> +	case KASAN_KMALLOC_REDZONE:
>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>  		bug_type = "buffer overflow";
>  		break;
>  	case KASAN_FREE_PAGE:
> +	case KASAN_SLAB_FREE:
> +	case KASAN_KMALLOC_FREE:
>  		bug_type = "use after free";
>  		break;
>  	case KASAN_SHADOW_GAP:
> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
>  	page = virt_to_page(info->access_addr);
>  
>  	switch (shadow_val) {
> +	case KASAN_SLAB_REDZONE:
> +		cache = virt_to_cache((void *)info->access_addr);
> +		slab_err(cache, page, "access to slab redzone");

We need the head page of the invalid access address for slab_err(), since the
head page holds all the metadata of this slab. So, instead of virt_to_cache(),
use virt_to_head_page() and page->slab_cache.
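
Something along these lines (an untested sketch of the suggested change,
all names taken from the hunk above):

	case KASAN_SLAB_REDZONE:
		page = virt_to_head_page((void *)info->access_addr);
		cache = page->slab_cache;
		slab_err(cache, page, "access to slab redzone");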

> +		dump_stack();
> +		break;
> +	case KASAN_KMALLOC_FREE:
> +	case KASAN_KMALLOC_REDZONE:
> +	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +		if (PageSlab(page)) {
> +			cache = virt_to_cache((void *)info->access_addr);
> +			slab_start = page_address(virt_to_head_page((void *)info->access_addr));
> +			object = virt_to_obj(cache, slab_start,
> +					(void *)info->access_addr);
> +			object_err(cache, page, object, "kasan error");
> +			break;
> +		}

Same here: the page should be the head page.
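
Again only an untested sketch, using the head page throughout:

	case KASAN_KMALLOC_FREE:
	case KASAN_KMALLOC_REDZONE:
	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
		page = virt_to_head_page((void *)info->access_addr);
		if (PageSlab(page)) {
			cache = page->slab_cache;
			slab_start = page_address(page);
			object = virt_to_obj(cache, slab_start,
					(void *)info->access_addr);
			object_err(cache, page, object, "kasan error");
			break;
		}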

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-07-09 11:30   ` Andrey Ryabinin
  (?)
@ 2014-07-15  6:12     ` Joonsoo Kim
  -1 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  6:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
> We need to manually unpoison rounded up allocation size for dname
> to avoid kasan's reports in __d_lookup_rcu.
> __d_lookup_rcu may validly read a little beyound allocated size.

If it reads a little beyond the allocated size, IMHO, it is better to
allocate the correct size in the first place:

kmalloc(name->len + 1, GFP_KERNEL); -->
kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);

Isn't it?
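
That is, in __d_alloc() the allocation would become something like (sketch):

	dname = kmalloc(roundup(name->len + 1, sizeof(unsigned long)),
			GFP_KERNEL);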

Thanks.

> 
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  fs/dcache.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/fs/dcache.c b/fs/dcache.c
> index b7e8b20..dff64f2 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -38,6 +38,7 @@
>  #include <linux/prefetch.h>
>  #include <linux/ratelimit.h>
>  #include <linux/list_lru.h>
> +#include <linux/kasan.h>
>  #include "internal.h"
>  #include "mount.h"
>  
> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>  			kmem_cache_free(dentry_cache, dentry); 
>  			return NULL;
>  		}
> +		unpoison_shadow(dname,
> +				roundup(name->len + 1, sizeof(unsigned long)));
>  	} else  {
>  		dname = dentry->d_iname;
>  	}	
> -- 
> 1.8.5.5
> 

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes
  2014-07-15  5:52     ` Joonsoo Kim
  (?)
@ 2014-07-15  6:54       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  6:54 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 09:52, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
>> Add kernel address sanitizer hooks to mark allocated page's addresses
>> as accessible in corresponding shadow region.
>> Mark freed pages as unaccessible.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  include/linux/kasan.h |  6 ++++++
>>  mm/Makefile           |  2 ++
>>  mm/kasan/kasan.c      | 18 ++++++++++++++++++
>>  mm/kasan/kasan.h      |  1 +
>>  mm/kasan/report.c     |  7 +++++++
>>  mm/page_alloc.c       |  4 ++++
>>  6 files changed, 38 insertions(+)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 7efc3eb..4adc0a1 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -17,6 +17,9 @@ void kasan_disable_local(void);
>>  void kasan_alloc_shadow(void);
>>  void kasan_init_shadow(void);
>>  
>> +void kasan_alloc_pages(struct page *page, unsigned int order);
>> +void kasan_free_pages(struct page *page, unsigned int order);
>> +
>>  #else /* CONFIG_KASAN */
>>  
>>  static inline void unpoison_shadow(const void *address, size_t size) {}
>> @@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
>>  static inline void kasan_init_shadow(void) {}
>>  static inline void kasan_alloc_shadow(void) {}
>>  
>> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>> +
>>  #endif /* CONFIG_KASAN */
>>  
>>  #endif /* LINUX_KASAN_H */
>> diff --git a/mm/Makefile b/mm/Makefile
>> index dbe9a22..6a9c3f8 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -2,6 +2,8 @@
>>  # Makefile for the linux memory manager.
>>  #
>>  
>> +KASAN_SANITIZE_page_alloc.o := n
>> +
>>  mmu-y			:= nommu.o
>>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
>>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index e2cd345..109478e 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
>>  	}
>>  }
>>  
>> +void kasan_alloc_pages(struct page *page, unsigned int order)
>> +{
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	if (likely(page && !PageHighMem(page)))
>> +		unpoison_shadow(page_address(page), PAGE_SIZE << order);
>> +}
>> +
>> +void kasan_free_pages(struct page *page, unsigned int order)
>> +{
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	if (likely(!PageHighMem(page)))
>> +		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
>> +}
>> +
>>  void *kasan_memcpy(void *dst, const void *src, size_t len)
>>  {
>>  	if (unlikely(len == 0))
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index 711ae4f..be9597e 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -5,6 +5,7 @@
>>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>>  
>> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>>  
>>  struct access_info {
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 2430e05..6ef9e57 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
>>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>>  		bug_type = "buffer overflow";
>>  		break;
>> +	case KASAN_FREE_PAGE:
>> +		bug_type = "use after free";
>> +		break;
>>  	case KASAN_SHADOW_GAP:
>>  		bug_type = "wild memory access";
>>  		break;
>> @@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
>>  	page = virt_to_page(info->access_addr);
>>  
>>  	switch (shadow_val) {
>> +	case KASAN_FREE_PAGE:
>> +		dump_page(page, "kasan error");
>> +		dump_stack();
>> +		break;
>>  	case KASAN_SHADOW_GAP:
>>  		pr_err("No metainfo is available for this access.\n");
>>  		dump_stack();
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 8c9eeec..67833d1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -61,6 +61,7 @@
>>  #include <linux/page-debug-flags.h>
>>  #include <linux/hugetlb.h>
>>  #include <linux/sched/rt.h>
>> +#include <linux/kasan.h>
>>  
>>  #include <asm/sections.h>
>>  #include <asm/tlbflush.h>
>> @@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>>  
>>  	trace_mm_page_free(page, order);
>>  	kmemcheck_free_shadow(page, order);
>> +	kasan_free_pages(page, order);
>>  
>>  	if (PageAnon(page))
>>  		page->mapping = NULL;
>> @@ -2807,6 +2809,7 @@ out:
>>  	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
>>  		goto retry_cpuset;
>>  
>> +	kasan_alloc_pages(page, order);
>>  	return page;
>>  }
>>  EXPORT_SYMBOL(__alloc_pages_nodemask);
>> @@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>>  	if (end != outer_end)
>>  		free_contig_range(end, outer_end - end);
>>  
>> +	kasan_alloc_pages(pfn_to_page(start), end - start);
>>  done:
>>  	undo_isolate_page_range(pfn_max_align_down(start),
>>  				pfn_max_align_up(end), migratetype);
> 
> Hello,
> 
> I don't think that this is right place for this hook.
>

I also made a stupid mistake here: kasan_alloc_pages() expects a page order,
not a count of pages.

> There is a function, __isolate_free_pages(), which steals buddy pages
> from page allocator. So you should put this hook onto that function.
> 
> alloc_contig_range() uses that function through below call path, so
> adding hook on it solves your issue here.
> 
> alloc_contig_range() -> isolate_freepages_range() ->
> isolate_freepages_block() -> split_free_page -> __isolate_free_page()
> 
> And, this also solves marking issue on compaction logic, since
> compaction also steal buddy pages from page allocator through
> isolate_freepages() -> isolate_freepages_block() -> split_free_page()
> -> __isolate_free_page().
> 
Yep, I've seen some false positives when compaction was enabled, and just
yesterday I fixed it as you suggested.

I'm also going to move the kasan_alloc_pages() hook from __alloc_pages_nodemask()
to prep_new_page(). I think that is a better place for such a hook, and it will
make it possible to enable instrumentation for page_alloc.c.
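
Roughly like this (just a sketch, exact placement inside prep_new_page()
still to be decided):

	@@ __alloc_pages_nodemask() @@
	-	kasan_alloc_pages(page, order);
	 	return page;

	@@ prep_new_page() @@
	+	kasan_alloc_pages(page, order);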

Thanks

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
  2014-07-15  5:53     ` Joonsoo Kim
  (?)
@ 2014-07-15  6:56       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  6:56 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 09:53, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
>> This patch shares virt_to_cache() between slab and slub and
>> it used in cache_from_obj() now.
>> Later virt_to_cache() will be kernel address sanitizer also.
> 
> I think that this patch won't be needed.
> See comment in 15/21.
> 

Ok, I'll drop it.

> Thanks.
> 
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-15  6:04     ` Joonsoo Kim
  (?)
@ 2014-07-15  7:37       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  7:37 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 10:04, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
>> Some code in slub could validly touch memory marked by kasan as unaccessible.
>> Even though slub.c doesn't instrumented, functions called in it are instrumented,
>> so to avoid false positive reports such places are protected by
>> kasan_disable_local()/kasan_enable_local() calls.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  mm/slub.c | 21 +++++++++++++++++++--
>>  1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 6ddedf9..c8dbea7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>>  	if (!(s->flags & SLAB_STORE_USER))
>>  		return;
>>  
>> +	kasan_disable_local();
>>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>>  	print_track("Freed", get_track(s, object, TRACK_FREE));
>> +	kasan_enable_local();
> 
> I don't think that this is needed since print_track() doesn't call
> external function with object pointer. print_track() call pr_err(), but,
> before calling, it retrieve t->addrs[i] so memory access only occurs
> in slub.c.
> 
Agree.

>>  }
>>  
>>  static void print_page_info(struct page *page)
>> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>  	unsigned int off;	/* Offset of last byte */
>>  	u8 *addr = page_address(page);
>>  
>> +	kasan_disable_local();
>> +
>>  	print_tracking(s, p);
>>  
>>  	print_page_info(page);
>> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>  		/* Beginning of the filler is the free pointer */
>>  		print_section("Padding ", p + off, s->size - off);
>>  
>> +	kasan_enable_local();
>> +
>>  	dump_stack();
>>  }
> 
> And, I recommend that you put this hook on right place.
> At a glance, the problematic function is print_section() which have
> external function call, print_hex_dump(), with object pointer.
> If you disable kasan in print_section, all the below thing won't be
> needed, I guess.
> 

Nope, at least the memchr_inv() call in slab_pad_check() will be a problem.

I think putting disable/enable only where we strictly need them might be a
problem for future maintenance of slub. If someone adds a new function call
somewhere, he must ensure that this call won't be a problem for kasan.
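
E.g. with that approach slab_pad_check() would also need a wrapper around its
memchr_inv() call, something like this (sketch; the variable names here are
just placeholders, POISON_INUSE is the slab padding pattern checked there):

	kasan_disable_local();
	fault = memchr_inv(pad_start, POISON_INUSE, pad_len);
	kasan_enable_local();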



> Thanks.
> 
>>  
>> @@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>>  					struct page *page,
>>  					void *object, unsigned long addr)
>>  {
>> +
>> +	kasan_disable_local();
>>  	if (!check_slab(s, page))
>>  		goto bad;
>>  
>> @@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
>>  		set_track(s, object, TRACK_ALLOC, addr);
>>  	trace(s, page, object, 1);
>>  	init_object(s, object, SLUB_RED_ACTIVE);
>> +	kasan_enable_local();
>>  	return 1;
>>  
>>  bad:
>> @@ -1041,6 +1050,7 @@ bad:
>>  		page->inuse = page->objects;
>>  		page->freelist = NULL;
>>  	}
>> +	kasan_enable_local();
>>  	return 0;
>>  }
>>  
>> @@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>>  
>>  	spin_lock_irqsave(&n->list_lock, *flags);
>>  	slab_lock(page);
>> +	kasan_disable_local();
>>  
>>  	if (!check_slab(s, page))
>>  		goto fail;
>> @@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
>>  	trace(s, page, object, 0);
>>  	init_object(s, object, SLUB_RED_INACTIVE);
>>  out:
>> +	kasan_enable_local();
>>  	slab_unlock(page);
>>  	/*
>>  	 * Keep node_lock to preserve integrity
>> @@ -1096,6 +1108,7 @@ out:
>>  	return n;
>>  
>>  fail:
>> +	kasan_enable_local();
>>  	slab_unlock(page);
>>  	spin_unlock_irqrestore(&n->list_lock, *flags);
>>  	slab_fix(s, "Object at 0x%p not freed", object);
>> @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>>  				void *object)
>>  {
>>  	setup_object_debug(s, page, object);
>> -	if (unlikely(s->ctor))
>> +	if (unlikely(s->ctor)) {
>> +		kasan_disable_local();
>>  		s->ctor(object);
>> +		kasan_enable_local();
>> +	}
>>  }
>>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>>  
>>  	if (kmem_cache_debug(s)) {
>>  		void *p;
>> -
>> +		kasan_disable_local();
>>  		slab_pad_check(s, page);
>>  		for_each_object(p, s, page_address(page),
>>  						page->objects)
>>  			check_object(s, page, p, SLUB_RED_INACTIVE);
>> +		kasan_enable_local();
>>  	}
>>  
>>  	kmemcheck_free_shadow(page, compound_order(page));
>> -- 
>> 1.8.5.5
>>
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator
  2014-07-15  6:09     ` Joonsoo Kim
  (?)
@ 2014-07-15  7:45       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  7:45 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 10:09, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
>> With this patch kasan will be able to catch bugs in memory allocated
>> by slub.
>> Allocated slab page, this whole page marked as unaccessible
>> in corresponding shadow memory.
>> On allocation of slub object requested allocation size marked as
>> accessible, and the rest of the object (including slub's metadata)
>> marked as redzone (unaccessible).
>>
>> We also mark object as accessible if ksize was called for this object.
>> There is some places in kernel where ksize function is called to inquire
>> size of really allocated area. Such callers could validly access whole
>> allocated memory, so it should be marked as accessible by kasan_krealloc call.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  include/linux/kasan.h |  22 ++++++++++
>>  include/linux/slab.h  |  19 +++++++--
>>  lib/Kconfig.kasan     |   2 +
>>  mm/kasan/kasan.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
>>  mm/kasan/kasan.h      |   5 +++
>>  mm/kasan/report.c     |  23 +++++++++++
>>  mm/slab.h             |   2 +-
>>  mm/slab_common.c      |   9 +++--
>>  mm/slub.c             |  24 ++++++++++-
>>  9 files changed, 208 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 4adc0a1..583c011 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -20,6 +20,17 @@ void kasan_init_shadow(void);
>>  void kasan_alloc_pages(struct page *page, unsigned int order);
>>  void kasan_free_pages(struct page *page, unsigned int order);
>>  
>> +void kasan_kmalloc_large(const void *ptr, size_t size);
>> +void kasan_kfree_large(const void *ptr);
>> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
>> +void kasan_krealloc(const void *object, size_t new_size);
>> +
>> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
>> +void kasan_slab_free(struct kmem_cache *s, void *object);
>> +
>> +void kasan_alloc_slab_pages(struct page *page, int order);
>> +void kasan_free_slab_pages(struct page *page, int order);
>> +
>>  #else /* CONFIG_KASAN */
>>  
>>  static inline void unpoison_shadow(const void *address, size_t size) {}
>> @@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
>>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>>  
>> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
>> +static inline void kasan_kfree_large(const void *ptr) {}
>> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
>> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
>> +
>> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
>> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
>> +
>> +static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
>> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
>> +
>>  #endif /* CONFIG_KASAN */
>>  
>>  #endif /* LINUX_KASAN_H */
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index 68b1feab..a9513e9 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -104,6 +104,7 @@
>>  				(unsigned long)ZERO_SIZE_PTR)
>>  
>>  #include <linux/kmemleak.h>
>> +#include <linux/kasan.h>
>>  
>>  struct mem_cgroup;
>>  /*
>> @@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
>>   */
>>  static __always_inline void *kmalloc(size_t size, gfp_t flags)
>>  {
>> +	void *ret;
>> +
>>  	if (__builtin_constant_p(size)) {
>>  		if (size > KMALLOC_MAX_CACHE_SIZE)
>>  			return kmalloc_large(size, flags);
>> @@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>>  			if (!index)
>>  				return ZERO_SIZE_PTR;
>>  
>> -			return kmem_cache_alloc_trace(kmalloc_caches[index],
>> +			ret = kmem_cache_alloc_trace(kmalloc_caches[index],
>>  					flags, size);
>> +
>> +			kasan_kmalloc(kmalloc_caches[index], ret, size);
>> +
>> +			return ret;
>>  		}
>>  #endif
>>  	}
>> @@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
>>  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>>  {
>>  #ifndef CONFIG_SLOB
>> +	void *ret;
>> +
>>  	if (__builtin_constant_p(size) &&
>>  		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
>>  		int i = kmalloc_index(size);
>> @@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
>>  		if (!i)
>>  			return ZERO_SIZE_PTR;
>>  
>> -		return kmem_cache_alloc_node_trace(kmalloc_caches[i],
>> -						flags, node, size);
>> +		ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
>> +						  flags, node, size);
>> +
>> +		kasan_kmalloc(kmalloc_caches[i], ret, size);
>> +
>> +		return ret;
>>  	}
>>  #endif
>>  	return __kmalloc_node(size, flags, node);
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index 2bfff78..289a624 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
>>  
>>  config KASAN
>>  	bool "AddressSanitizer: dynamic memory error detector"
>> +	depends on SLUB
>> +	select STACKTRACE
>>  	default n
>>  	help
>>  	  Enables AddressSanitizer - dynamic memory error detector,
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index 109478e..9b5182a 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
>>  	}
>>  }
>>  
>> +void kasan_alloc_slab_pages(struct page *page, int order)
>> +{
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
>> +}
>> +
>> +void kasan_free_slab_pages(struct page *page, int order)
>> +{
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
>> +}
>> +
>> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
>> +{
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	if (unlikely(object == NULL))
>> +		return;
>> +
>> +	poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
>> +	unpoison_shadow(object, cache->alloc_size);
>> +}
>> +
>> +void kasan_slab_free(struct kmem_cache *cache, void *object)
>> +{
>> +	unsigned long size = cache->size;
>> +	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> +
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>> +}
>> +
>> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
>> +{
>> +	unsigned long redzone_start;
>> +	unsigned long redzone_end;
>> +
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	if (unlikely(object == NULL))
>> +		return;
>> +
>> +	redzone_start = round_up((unsigned long)(object + size),
>> +				KASAN_SHADOW_SCALE_SIZE);
>> +	redzone_end = (unsigned long)object + cache->size;
>> +
>> +	unpoison_shadow(object, size);
>> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> +		KASAN_KMALLOC_REDZONE);
>> +
>> +}
>> +EXPORT_SYMBOL(kasan_kmalloc);
>> +
>> +void kasan_kmalloc_large(const void *ptr, size_t size)
>> +{
>> +	struct page *page;
>> +	unsigned long redzone_start;
>> +	unsigned long redzone_end;
>> +
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	if (unlikely(ptr == NULL))
>> +		return;
>> +
>> +	page = virt_to_page(ptr);
>> +	redzone_start = round_up((unsigned long)(ptr + size),
>> +				KASAN_SHADOW_SCALE_SIZE);
>> +	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>> +
>> +	unpoison_shadow(ptr, size);
>> +	poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> +		KASAN_PAGE_REDZONE);
>> +}
>> +EXPORT_SYMBOL(kasan_kmalloc_large);
>> +
>> +void kasan_krealloc(const void *object, size_t size)
>> +{
>> +	struct page *page;
>> +
>> +	if (unlikely(object == ZERO_SIZE_PTR))
>> +		return;
>> +
>> +	page = virt_to_head_page(object);
>> +
>> +	if (unlikely(!PageSlab(page)))
>> +		kasan_kmalloc_large(object, size);
>> +	else
>> +		kasan_kmalloc(page->slab_cache, object, size);
>> +}
>> +
>> +void kasan_kfree_large(const void *ptr)
>> +{
>> +	struct page *page;
>> +
>> +	if (unlikely(!kasan_initialized))
>> +		return;
>> +
>> +	page = virt_to_page(ptr);
>> +	poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
>> +}
>> +
>>  void kasan_alloc_pages(struct page *page, unsigned int order)
>>  {
>>  	if (unlikely(!kasan_initialized))
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index be9597e..f925d03 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -6,6 +6,11 @@
>>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>>  
>>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
>> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
>> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
>> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
>> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>>  
>>  struct access_info {
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 6ef9e57..6d829af 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
>>  	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
>>  
>>  	switch (shadow_val) {
>> +	case KASAN_PAGE_REDZONE:
>> +	case KASAN_SLAB_REDZONE:
>> +	case KASAN_KMALLOC_REDZONE:
>>  	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>>  		bug_type = "buffer overflow";
>>  		break;
>>  	case KASAN_FREE_PAGE:
>> +	case KASAN_SLAB_FREE:
>> +	case KASAN_KMALLOC_FREE:
>>  		bug_type = "use after free";
>>  		break;
>>  	case KASAN_SHADOW_GAP:
>> @@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
>>  	page = virt_to_page(info->access_addr);
>>  
>>  	switch (shadow_val) {
>> +	case KASAN_SLAB_REDZONE:
>> +		cache = virt_to_cache((void *)info->access_addr);
>> +		slab_err(cache, page, "access to slab redzone");
> 
> We need the head page of the invalid access address for slab_err(), since
> the head page has all the metadata of this slab. So, instead of
> virt_to_cache(), use virt_to_head_page() and page->slab_cache.
> 
>> +		dump_stack();
>> +		break;
>> +	case KASAN_KMALLOC_FREE:
>> +	case KASAN_KMALLOC_REDZONE:
>> +	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
>> +		if (PageSlab(page)) {
>> +			cache = virt_to_cache((void *)info->access_addr);
>> +			slab_start = page_address(virt_to_head_page((void *)info->access_addr));
>> +			object = virt_to_obj(cache, slab_start,
>> +					(void *)info->access_addr);
>> +			object_err(cache, page, object, "kasan error");
>> +			break;
>> +		}
> 
> Same here, the page should be the head page.
> 

Correct, I'll fix it.

Thanks.

> Thanks.
> 
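
For illustration, the fix Joonsoo asks for in mm/kasan/report.c could look
roughly like the sketch below. It is only a sketch of one way to apply the
suggestion, using helpers already present in this series, and not
necessarily what the next version will do:

    case KASAN_SLAB_REDZONE: {
            /* the head page carries the slab metadata needed by slab_err() */
            struct page *head = virt_to_head_page((void *)info->access_addr);

            cache = head->slab_cache;
            slab_err(cache, head, "access to slab redzone");
            dump_stack();
            break;
    }

The same substitution applies to the PageSlab() branch below it: take the
head page once and derive both the cache and slab_start from it.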


^ permalink raw reply	[flat|nested] 862+ messages in thread
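
As a worked example of what the kasan_kmalloc()/kasan_slab_free() hooks
quoted above do to the shadow, assume KASAN_SHADOW_SCALE_SIZE == 8, an
8-byte aligned object p, and a cache whose cache->size (object plus slub
metadata) happens to be 32; the sizes are made up for illustration:

    void *p = kmalloc(13, GFP_KERNEL);
    /*
     * kasan_kmalloc(cache, p, 13):
     *   redzone_start = round_up(p + 13, 8)  = p + 16
     *   redzone_end   = p + cache->size      = p + 32
     *   unpoison_shadow(p, 13);               shadow: 00 05 .. ..
     *   poison_shadow(p + 16, 16, 0xFC);      shadow: 00 05 FC FC
     *
     * p[12] is a valid access; p[13] or p[16] is reported as a
     * "buffer overflow".
     */

    kfree(p);
    /*
     * kasan_slab_free(cache, p):
     *   poison_shadow(p, round_up(32, 8), 0xFB);   shadow: FB FB FB FB
     *
     * any later access through p is reported as a "use after free".
     */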

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-15  7:37       ` Andrey Ryabinin
  (?)
@ 2014-07-15  8:18         ` Joonsoo Kim
  0 siblings, 0 replies; 862+ messages in thread
From: Joonsoo Kim @ 2014-07-15  8:18 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
> On 07/15/14 10:04, Joonsoo Kim wrote:
> > On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> >> Some code in slub could validly touch memory marked by kasan as unaccessible.
> >> Even though slub.c itself isn't instrumented, the functions called from it are instrumented,
> >> so to avoid false positive reports such places are protected by
> >> kasan_disable_local()/kasan_enable_local() calls.
> >>
> >> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> >> ---
> >>  mm/slub.c | 21 +++++++++++++++++++--
> >>  1 file changed, 19 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 6ddedf9..c8dbea7 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
> >>  	if (!(s->flags & SLAB_STORE_USER))
> >>  		return;
> >>  
> >> +	kasan_disable_local();
> >>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
> >>  	print_track("Freed", get_track(s, object, TRACK_FREE));
> >> +	kasan_enable_local();
> > 
> > I don't think that this is needed, since print_track() doesn't call an
> > external function with the object pointer. print_track() calls pr_err(),
> > but before calling it retrieves t->addrs[i], so the memory access only
> > occurs in slub.c.
> > 
> Agree.
> 
> >>  }
> >>  
> >>  static void print_page_info(struct page *page)
> >> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> >>  	unsigned int off;	/* Offset of last byte */
> >>  	u8 *addr = page_address(page);
> >>  
> >> +	kasan_disable_local();
> >> +
> >>  	print_tracking(s, p);
> >>  
> >>  	print_page_info(page);
> >> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
> >>  		/* Beginning of the filler is the free pointer */
> >>  		print_section("Padding ", p + off, s->size - off);
> >>  
> >> +	kasan_enable_local();
> >> +
> >>  	dump_stack();
> >>  }
> > 
> > And I recommend that you put this hook in the right place.
> > At a glance, the problematic function is print_section(), which has an
> > external function call, print_hex_dump(), with the object pointer.
> > If you disable kasan in print_section(), none of the changes below will
> > be needed, I guess.
> > 
> 
> Nope, at least the memchr_inv() call in slab_pad_check() will be a problem.
> 
> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
> If someone is going to add a new function call somewhere, he must ensure that this call won't be a problem
> for kasan.

I don't agree with this.

If someone is going to add a slab_pad_check() in other places in
slub.c, we should disable/enable kasan there, too. This looks like the
same maintenance problem to me. Putting disable/enable only where we
strictly need it at least ensures that we don't need to care when using
slub internal functions.

And, if memchr_inv() is a problem, I think that you also need to add a hook
into validate_slab_cache().

validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
check_object() -> check_bytes_and_report() -> memchr_inv()

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread
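
A sketch of the placement Joonsoo suggests: push the disable/enable pair
down into the helpers that actually read poisoned bytes, rather than
wrapping their callers. print_section() is shown below against the slub
code this series is based on (the exact signature may differ); the
memchr_inv() users named above -- slab_pad_check() and
check_bytes_and_report() -- would need the same treatment:

    static void print_section(char *text, u8 *addr, unsigned int length)
    {
            /*
             * The bytes being dumped may be redzoned or freed, so the
             * instrumented print_hex_dump() must not be checked here.
             */
            kasan_disable_local();
            print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1,
                           addr, length, 1);
            kasan_enable_local();
    }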

* Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-07-15  6:12     ` Joonsoo Kim
  (?)
@ 2014-07-15  9:34       ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  9:34 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 10:12, Joonsoo Kim wrote:
> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>> We need to manually unpoison the rounded up allocation size for dname
>> to avoid kasan's reports in __d_lookup_rcu.
>> __d_lookup_rcu may validly read a little beyond the allocated size.
> 
> If it reads a little beyond the allocated size, IMHO it is better to
> allocate the correct size.
> 
> kmalloc(name->len + 1, GFP_KERNEL); -->
> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
> 
> Isn't it?
> 

It's not needed here because kmalloc always rounds up the allocation size.

This out of bounds access happens in dentry_string_cmp() if CONFIG_DCACHE_WORD_ACCESS=y.
dentry_string_cmp() relies on the fact that kmalloc always rounds up the allocation size;
in other words, it's by design.

That was discussed some time ago here - https://lkml.org/lkml/2013/10/3/493.
Since the filesystem maintainer doesn't want to add a needless round up here, I'm not going to do it.

I think this patch only needs a more detailed description of why we don't simply allocate more.
Also I think it would be better to rename unpoison_shadow to something like kasan_mark_allocated().


> Thanks.
> 
>>
>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  fs/dcache.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index b7e8b20..dff64f2 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -38,6 +38,7 @@
>>  #include <linux/prefetch.h>
>>  #include <linux/ratelimit.h>
>>  #include <linux/list_lru.h>
>> +#include <linux/kasan.h>
>>  #include "internal.h"
>>  #include "mount.h"
>>  
>> @@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
>>  			kmem_cache_free(dentry_cache, dentry); 
>>  			return NULL;
>>  		}
>> +		unpoison_shadow(dname,
>> +				roundup(name->len + 1, sizeof(unsigned long)));
>>  	} else  {
>>  		dname = dentry->d_iname;
>>  	}	
>> -- 
>> 1.8.5.5
>>
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread
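
To make the over-read concrete, a worked example (assuming a 64-bit kernel
with CONFIG_DCACHE_WORD_ACCESS=y; the length is made up):

    /*
     * name->len == 5:
     *
     *   dname = kmalloc(5 + 1, GFP_KERNEL);
     *
     * 6 bytes are requested, but the object comes from the kmalloc-8
     * cache, so 8 bytes really exist.  dentry_string_cmp() compares the
     * name in sizeof(unsigned long) chunks, so the last load may touch
     * bytes 6 and 7.  That read is safe (the memory exists) but lies
     * outside the requested size, hence:
     *
     *   unpoison_shadow(dname, roundup(5 + 1, sizeof(unsigned long)));
     *
     * marks all 8 bytes as accessible instead of only 6, without
     * changing the allocation size itself.
     */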

* Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-07-15  9:34       ` Andrey Ryabinin
  (?)
@ 2014-07-15  9:45         ` Dmitry Vyukov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-07-15  9:45 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Joonsoo Kim, LKML, Konstantin Serebryany, Alexey Preobrazhensky,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Russell King, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm

On Tue, Jul 15, 2014 at 1:34 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> On 07/15/14 10:12, Joonsoo Kim wrote:
>> On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
>>> We need to manually unpoison the rounded up allocation size for dname
>>> to avoid kasan's reports in __d_lookup_rcu.
>>> __d_lookup_rcu may validly read a little beyond the allocated size.
>>
>> If it reads a little beyond the allocated size, IMHO it is better to
>> allocate the correct size.
>>
>> kmalloc(name->len + 1, GFP_KERNEL); -->
>> kmalloc(roundup(name->len + 1, sizeof(unsigned long)), GFP_KERNEL);
>>
>> Isn't it?
>>
>
> It's not needed here because kmalloc always rounds up the allocation size.
>
> This out of bounds access happens in dentry_string_cmp() if CONFIG_DCACHE_WORD_ACCESS=y.
> dentry_string_cmp() relies on the fact that kmalloc always rounds up the allocation size;
> in other words, it's by design.
>
> That was discussed some time ago here - https://lkml.org/lkml/2013/10/3/493.
> Since the filesystem maintainer doesn't want to add a needless round up here, I'm not going to do it.
>
> I think this patch only needs a more detailed description of why we don't simply allocate more.
> Also I think it would be better to rename unpoison_shadow to something like kasan_mark_allocated().


Note that this poison/unpoison functionality can be used in other
contexts. E.g. when you allocate a bunch of pages, then at some point
poison a part of it to ensure that nobody touches it, then unpoison it
back. Allocated/unallocated looks like a bad fit here, because it has
nothing to do with allocation state. Poison/unpoison is also what we
use in user-space.

^ permalink raw reply	[flat|nested] 862+ messages in thread
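
A sketch of the more general use Dmitry describes, reusing the
shadow-poisoning primitives of this series as a guard outside the
allocator hooks. KASAN_GUARD, guard_last_page() and unguard_last_page()
are hypothetical names invented for this illustration, and it assumes
poison_shadow()/unpoison_shadow() were made callable from outside
mm/kasan/ (in the series they are only used internally):

    /* hypothetical poison value, not one of those in mm/kasan/kasan.h */
    #define KASAN_GUARD 0xF8

    static void guard_last_page(void *area, int order)
    {
            size_t size = PAGE_SIZE << order;

            /* any access to the last page now produces a kasan report */
            poison_shadow(area + size - PAGE_SIZE, PAGE_SIZE, KASAN_GUARD);
    }

    static void unguard_last_page(void *area, int order)
    {
            size_t size = PAGE_SIZE << order;

            /* hand the page back: accesses become legal again */
            unpoison_shadow(area + size - PAGE_SIZE, PAGE_SIZE);
    }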

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-15  8:18         ` Joonsoo Kim
  (?)
@ 2014-07-15  9:51           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15  9:51 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On 07/15/14 12:18, Joonsoo Kim wrote:
> On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
>> On 07/15/14 10:04, Joonsoo Kim wrote:
>>> On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
>>>> Some code in slub could validly touch memory marked by kasan as unaccessible.
>>>> Even though slub.c isn't instrumented, functions called from it are instrumented,
>>>> so to avoid false positive reports such places are protected by
>>>> kasan_disable_local()/kasan_enable_local() calls.
>>>>
>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>> ---
>>>>  mm/slub.c | 21 +++++++++++++++++++--
>>>>  1 file changed, 19 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/slub.c b/mm/slub.c
>>>> index 6ddedf9..c8dbea7 100644
>>>> --- a/mm/slub.c
>>>> +++ b/mm/slub.c
>>>> @@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
>>>>  	if (!(s->flags & SLAB_STORE_USER))
>>>>  		return;
>>>>  
>>>> +	kasan_disable_local();
>>>>  	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
>>>>  	print_track("Freed", get_track(s, object, TRACK_FREE));
>>>> +	kasan_enable_local();
>>>
>>> I don't think that this is needed since print_track() doesn't call an
>>> external function with the object pointer. print_track() calls pr_err(), but
>>> before calling it retrieves t->addrs[i], so the memory access only occurs
>>> in slub.c.
>>>
>> Agree.
>>
>>>>  }
>>>>  
>>>>  static void print_page_info(struct page *page)
>>>> @@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>>>  	unsigned int off;	/* Offset of last byte */
>>>>  	u8 *addr = page_address(page);
>>>>  
>>>> +	kasan_disable_local();
>>>> +
>>>>  	print_tracking(s, p);
>>>>  
>>>>  	print_page_info(page);
>>>> @@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>>>>  		/* Beginning of the filler is the free pointer */
>>>>  		print_section("Padding ", p + off, s->size - off);
>>>>  
>>>> +	kasan_enable_local();
>>>> +
>>>>  	dump_stack();
>>>>  }
>>>
>>> And, I recommend that you put this hook in the right place.
>>> At a glance, the problematic function is print_section(), which has an
>>> external function call, print_hex_dump(), with the object pointer.
>>> If you disable kasan in print_section(), all the things below won't be
>>> needed, I guess.
>>>
>>
>> Nope, at least the memchr_inv() call in slab_pad_check() will be a problem.
>>
>> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
>> If someone is going to add a new function call somewhere, he must ensure that this call won't be a problem
>> for kasan.
> 
> I don't agree with this.
> 
> If someone is going to add a slab_pad_check() in other places in
> slub.c, we should disable/enable kasan there, too. This looks like the same
> maintenance problem to me. Putting disable/enable only where we
> strictly need it at least ensures that we don't need to care when using
> slub internal functions.
> 
> And, if memchr_inv() is problem, I think that you also need to add hook
> into validate_slab_cache().
> 
> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
> check_object() -> check_bytes_and_report() -> memchr_inv()
> 
> Thanks.
> 

Ok, you convinced me. I'll do it.
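
A rough sketch of what that could look like on the check_bytes_and_report() path
(illustration only, reusing the kasan_disable_local()/kasan_enable_local() helpers from
this series; the final patch may place the calls differently):

	/* in check_bytes_and_report(): the scan below reads redzones/padding */
	kasan_disable_local();
	fault = memchr_inv(start, value, bytes);
	kasan_enable_local();
	if (!fault)
		return 1;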



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-15  8:18         ` Joonsoo Kim
  (?)
@ 2014-07-15 14:26           ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-07-15 14:26 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Alexey Preobrazhensky, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Russell King, Thomas Gleixner, Ingo Molnar, Pekka Enberg,
	David Rientjes, Andrew Morton, linux-kbuild, linux-arm-kernel,
	x86, linux-mm

On Tue, 15 Jul 2014, Joonsoo Kim wrote:

> > I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
> > If someone is going to add a new function call somewhere, he must ensure that this call won't be a problem
> > for kasan.
>
> I don't agree with this.
>
> If someone is going to add a slab_pad_check() in other places in
> slub.c, we should disable/enable kasan there, too. This looks like the same
> maintenance problem to me. Putting disable/enable only where we
> strictly need it at least ensures that we don't need to care when using
> slub internal functions.
>
> And, if memchr_inv() is problem, I think that you also need to add hook
> into validate_slab_cache().
>
> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
> check_object() -> check_bytes_and_report() -> memchr_inv()

I think adding disable/enable is good because it separates the payload
access from metadata accesses. This may be useful for future checkers.
Maybe call it something different so that this is more generic.

metadata_access_enable()

metadata_access_disable()

?

Maybe someone else has a better idea?
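
One possible shape for such wrappers (sketch only; for now they would presumably just
delegate to the kasan annotations from this series, and other checkers could hook in
later):

	/* mark the start/end of a metadata (non-payload) access in slub.c */
	static inline void metadata_access_enable(void)
	{
		kasan_disable_local();
	}

	static inline void metadata_access_disable(void)
	{
		kasan_enable_local();
	}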



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory
  2014-07-15 14:26           ` Christoph Lameter
  (?)
@ 2014-07-15 15:02             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-07-15 15:02 UTC (permalink / raw)
  To: Christoph Lameter, Joonsoo Kim
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Alexey Preobrazhensky, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek, Russell King,
	Thomas Gleixner, Ingo Molnar, Pekka Enberg, David Rientjes,
	Andrew Morton, linux-kbuild, linux-arm-kernel, x86, linux-mm

On 07/15/14 18:26, Christoph Lameter wrote:
> On Tue, 15 Jul 2014, Joonsoo Kim wrote:
> 
>>> I think putting disable/enable only where we strictly need them might be a problem for future maintenance of slub.
>>> If someone is going to add a new function call somewhere, he must ensure that this call won't be a problem
>>> for kasan.
>>
>> I don't agree with this.
>>
>> If someone is going to add a slab_pad_check() in other places in
>> slub.c, we should disable/enable kasan there, too. This looks like the same
>> maintenance problem to me. Putting disable/enable only where we
>> strictly need it at least ensures that we don't need to care when using
>> slub internal functions.
>>
>> And, if memchr_inv() is problem, I think that you also need to add hook
>> into validate_slab_cache().
>>
>> validate_slab_cache() -> validate_slab_slab() -> validate_slab() ->
>> check_object() -> check_bytes_and_report() -> memchr_inv()
> 
> I think adding disable/enable is good because it separates the payload
> access from metadata accesses. This may be useful for future checkers.
> Maybe call it something different so that this is more generic.
> 
> metadata_access_enable()
> 
> metadata_access_disable()
> 
> ?
> 
It sounds like a good idea to me. However, in this patch, besides protecting metadata accesses,
these calls are also used in setup_object() for wrapping the ctor call. They are used there because all pages in allocate_slab()
are poisoned, so at the time the ctors are called all of the object's memory is marked as poisoned.

I think this could be solved by removing the kasan_alloc_slab_pages() hook from allocate_slab() and adding
a kasan_slab_free() hook after the ctor call.
But I guess in that case the padding at the end of the slab will be unpoisoned.
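
For reference, the ctor wrapping mentioned above looks roughly like this (sketch of the
current approach, not an exact hunk from the patch):

	static void setup_object(struct kmem_cache *s, struct page *page,
				void *object)
	{
		setup_object_debug(s, page, object);
		if (unlikely(s->ctor)) {
			/* the whole slab page is still poisoned at this point */
			kasan_disable_local();
			s->ctor(object);
			kasan_enable_local();
		}
	}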

> Maybe someone else has a better idea?
> 
> 
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-09-10 14:31   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, linux-kbuild, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Catalin Marinas

Hi,
This is a second iteration of the kernel address sanitizer (KASan).

KASan is a dynamic memory error detector designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC >= v5.0.0.

Patches are applied on mmotm/next trees and also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v2

A lot of people asked about how kasan is different from other debugging features,
so here is a short comparison:

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is the detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on a sub-page
	  granularity level, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is function to translate address to corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
                                 + KASAN_SHADOW_START;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8 bytes are unaccessible.
    Different negative values are used to distinguish between different kinds of
    unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
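
    As a rough illustration of this encoding, a check for a single 1-byte access could
    look like the following (sketch only; the helper name is made up and the real checks
    in mm/kasan/kasan.c also handle wider accesses and reporting):

         static bool byte_is_poisoned(unsigned long addr)
         {
                 s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                 if (shadow == 0)
                         return false;   /* whole 8-byte region accessible */
                 if (shadow < 0)
                         return true;    /* redzone, freed memory, ... */
                 /* 1..7: only the first 'shadow' bytes are accessible */
                 return (addr & (KASAN_SHADOW_SCALE_SIZE - 1)) >= shadow;
         }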


Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

        There are several reasons for such change.
         - Shadow for every available kernel address allows us to get rid of checks like that:
             if (addr >= PAGE_OFFSET && addr < high_memory)
                 // check shadow ...

         - Later we want to catch out-of-bounds accesses to global variables, so we will need shadow
           to cover the kernel image and module address ranges.

         - Such a shadow allows us to easily deal with sparse memory configurations and memory hotplug (not supported
	   yet, though it should be easy to do).

         - The last and the main reason is that we want to keep a simple 'real address' -> 'shadow address' translation:

                    (addr >> 3) + some_offset

            because it is fast, and because that's how inline instrumentation works in GCC.
            Inline instrumentation means that the compiler directly inserts code checking the shadow
            instead of calls to the __asan_load/__asan_store functions (outline instrumentation); see the sketch after this list.

             BTW, with a few changes in these patches and these two patches for GCC
             ( https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html ,
               https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html )
              inline instrumentation is already possible.


     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places
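
To illustrate the inline-instrumentation point above: the check the compiler emits
before, say, an 8-byte store is roughly equivalent to the C below (sketch; the actual
generated code and the name of the report helper differ):

	s8 shadow = *(s8 *)((addr >> 3) + some_offset);
	if (unlikely(shadow))	/* for an 8-byte access any non-zero shadow byte is bad */
		report_error(addr);	/* illustrative name; GCC emits a call to an __asan_report_* function */
	*(u64 *)addr = val;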


Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <linux-kbuild@vger.kernel.org>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>

Andrey Ryabinin (10):
  Add kernel address sanitizer infrastructure.
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free pathes
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module

 Documentation/kasan.txt              | 180 +++++++++++++++++++++
 Makefile                             |  10 +-
 arch/x86/Kconfig                     |   1 +
 arch/x86/boot/Makefile               |   2 +
 arch/x86/boot/compressed/Makefile    |   2 +
 arch/x86/include/asm/kasan.h         |  20 +++
 arch/x86/include/asm/page_64_types.h |   4 +
 arch/x86/include/asm/pgtable.h       |   7 +-
 arch/x86/kernel/Makefile             |   2 +
 arch/x86/kernel/dumpstack.c          |   5 +-
 arch/x86/kernel/head64.c             |   6 +
 arch/x86/kernel/head_64.S            |  16 ++
 arch/x86/mm/Makefile                 |   3 +
 arch/x86/mm/init.c                   |   3 +
 arch/x86/mm/kasan_init_64.c          |  59 +++++++
 arch/x86/realmode/Makefile           |   2 +-
 arch/x86/realmode/rm/Makefile        |   1 +
 arch/x86/vdso/Makefile               |   1 +
 fs/dcache.c                          |   5 +
 include/linux/kasan.h                |  75 +++++++++
 include/linux/sched.h                |   3 +
 include/linux/slab.h                 |  11 +-
 lib/Kconfig.debug                    |  10 ++
 lib/Kconfig.kasan                    |  18 +++
 lib/Makefile                         |   1 +
 lib/test_kasan.c                     | 254 +++++++++++++++++++++++++++++
 mm/Makefile                          |   4 +
 mm/compaction.c                      |   2 +
 mm/kasan/Makefile                    |   3 +
 mm/kasan/kasan.c                     | 299 +++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                     |  38 +++++
 mm/kasan/report.c                    | 214 +++++++++++++++++++++++++
 mm/kmemleak.c                        |   6 +
 mm/page_alloc.c                      |   3 +
 mm/slab.h                            |  11 ++
 mm/slab_common.c                     |   5 +-
 mm/slub.c                            |  56 ++++++-
 scripts/Makefile.lib                 |  10 ++
 38 files changed, 1340 insertions(+), 12 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
1.8.5.5


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some of the code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
                             + KASAN_SHADOW_START;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8 bytes are unaccessible.
Different negative values are used to distinguish between different kinds of
unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 180 ++++++++++++++++++++++++++++++++++++++++++++++
 Makefile                |  10 ++-
 include/linux/kasan.h   |  42 +++++++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  16 +++++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 188 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  32 +++++++++
 mm/kasan/report.c       | 183 ++++++++++++++++++++++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 +++
 12 files changed, 669 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..5a9d903
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,180 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for the x86_64 architecture and requires the kernel
+to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with SLUB.
+For better bug detection and a nicer report, enable CONFIG_STACKTRACE and CONFIG_SLUB_DEBUG,
+and put 'slub_debug=FU' in the boot cmdline.
+Please don't use slab poisoning with KASan (slub_debug=P), because if KASan
+detects a use-after-free, the allocation and free stacktraces will be overwritten
+by poison bytes, and KASan won't be able to print these backtraces.
+
+To exclude files from being instrumented by compiler, add a line
+similar to the following to the respective kernel Makefile:
+
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of the kernel's addressable memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function that translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
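
The shadow byte encoding described in the documentation above boils down to a small
check per 8-byte granule. Below is a minimal illustrative sketch (not part of the
patch; the helper name shadow_byte_allows() is invented for this example, the real
check is address_is_poisoned() in mm/kasan/kasan.c further down):

	/* Sketch: does a single shadow byte allow an access to 'addr'? */
	static bool shadow_byte_allows(unsigned long addr, s8 shadow)
	{
		if (shadow == 0)		/* all 8 bytes addressable */
			return true;
		if (shadow < 0)			/* redzone, freed memory, ... */
			return false;
		/* 1..7: only the first 'shadow' bytes of the granule are valid */
		return (addr & 7) < shadow;	/* 7 == KASAN_SHADOW_MASK */
	}

	/* The '03' byte in the report above allows offsets 0..2 of its granule. */
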
diff --git a/Makefile b/Makefile
index adc1884..8013146 100644
--- a/Makefile
+++ b/Makefile
@@ -388,6 +388,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN	= -fsanitize=kernel-address
 
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
@@ -432,7 +433,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -763,6 +764,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(call cc-option, $(CFLAGS_KASAN)),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..6055f64
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_START;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
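
To make kasan_mem_to_shadow() above concrete, here is a worked example. It is only
a sketch and assumes the x86_64 value KASAN_SHADOW_START = 0xffff800000000000 used
in the x86_64 patch elsewhere in this thread:

	/*
	 * With KASAN_SHADOW_SCALE_SHIFT = 3 and (for example)
	 * KASAN_SHADOW_START = 0xffff800000000000:
	 *
	 *   addr   = 0xffff900000000008
	 *   shadow = ((0xffff900000000008 - 0xffff800000000000) >> 3)
	 *              + 0xffff800000000000
	 *          = 0x0000020000000001 + 0xffff800000000000
	 *          = 0xffff820000000001
	 *
	 * The single shadow byte at 0xffff820000000001 describes the 8 bytes
	 * 0xffff900000000008..0xffff90000000000f of the original mapping.
	 */
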
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 7d799ea..7239425 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1658,6 +1658,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8a04a4e..09824b5 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -635,6 +635,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..22fec2d
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,16 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: dynamic memory error detector"
+	default n
+	help
+	  Enables address sanitizer - a dynamic memory error detector
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and causes a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index b2f18dc..b3c8b77 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,3 +65,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..65f8145
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,188 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
+
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+							size_t size)
+{
+	unsigned long end = addr + size;
+
+	for (; addr < end; addr++)
+		if (unlikely(address_is_poisoned(addr)))
+			return addr;
+	return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	unsigned long access_addr;
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < PAGE_OFFSET)) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	access_addr = memory_is_poisoned(addr, size);
+	if (likely(access_addr == 0))
+		return;
+
+	info.access_addr = access_addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to silence compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..2ea2ed7
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,32 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - KASAN_SHADOW_START)
+		<< KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+#endif
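
Note that kasan_shadow_to_mem() above is, up to rounding, the inverse of
kasan_mem_to_shadow(); the report code below relies on it to turn shadow rows back
into kernel addresses. A quick sanity sketch (assuming KASAN_SHADOW_START is
8-byte aligned):

	/*
	 *   kasan_shadow_to_mem(kasan_mem_to_shadow(addr)) == round_down(addr, 8)
	 *
	 * because the ">> KASAN_SHADOW_SCALE_SHIFT" in mem_to_shadow() drops the
	 * offset of addr inside its 8-byte granule before "<<" restores alignment.
	 */
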
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..3bfc8b6
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,183 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..7e2c9f8 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(call cc-option, $(CFLAGS_KASAN)))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
@ 2014-09-10 14:31     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the kernel's addressable memory for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
                             + KASAN_SHADOW_START;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8-bytes are unaccessible.
Different negative values used to distinguish between different kinds of
unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether memory region is valid to access or not by checking
corresponding shadow memory. If the access is not valid, an error is printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 180 ++++++++++++++++++++++++++++++++++++++++++++++
 Makefile                |  10 ++-
 include/linux/kasan.h   |  42 +++++++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  16 +++++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 188 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  32 +++++++++
 mm/kasan/report.c       | 183 ++++++++++++++++++++++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 +++
 12 files changed, 669 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..5a9d903
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,180 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for the x86_64 architecture and requires the
+kernel to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with SLUB.
+For better bug detection and nicer reports enable CONFIG_STACKTRACE, CONFIG_SLUB_DEBUG
+and add 'slub_debug=FU' to the boot cmdline.
+Please don't use slab poisoning with KASan (slub_debug=P): if KASan detects a
+use-after-free, the allocation and free stacktraces will have been overwritten by
+poison bytes, and KASan won't be able to print these backtraces.
+
+To exclude files from being instrumented by the compiler, add a line
+similar to the following to the respective kernel Makefile:
+
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of the kernel's addressable memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function that translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
diff --git a/Makefile b/Makefile
index adc1884..8013146 100644
--- a/Makefile
+++ b/Makefile
@@ -388,6 +388,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN	= -fsanitize=kernel-address
 
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
@@ -432,7 +433,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -763,6 +764,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(call cc-option, $(CFLAGS_KASAN)),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..6055f64
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_START;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 7d799ea..7239425 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1658,6 +1658,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8a04a4e..09824b5 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -635,6 +635,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..22fec2d
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,16 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: dynamic memory error detector"
+	default n
+	help
+	  Enables address sanitizer - a dynamic memory error detector
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and causes a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index b2f18dc..b3c8b77 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,3 +65,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..65f8145
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,188 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
+
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+							size_t size)
+{
+	unsigned long end = addr + size;
+
+	for (; addr < end; addr++)
+		if (unlikely(address_is_poisoned(addr)))
+			return addr;
+	return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	unsigned long access_addr;
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < PAGE_OFFSET)) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	access_addr = memory_is_poisoned(addr, size);
+	if (likely(access_addr == 0))
+		return;
+
+	info.access_addr = access_addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to silence compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..2ea2ed7
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,32 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - KASAN_SHADOW_START)
+		<< KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..3bfc8b6
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,183 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..7e2c9f8 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(call cc-option, $(CFLAGS_KASAN)))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Thomas Gleixner,
	Ingo Molnar

This patch adds arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [0xffff800000000000 - 0xffff900000000000].
Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
to 0xffff900000000000.

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.
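
As a sanity check of the sizes above (a sketch only, using the constants from this
patch and the generic kasan_mem_to_shadow() formula from patch 01/10):

	/*
	 *   kernel space covered: 0xffff800000000000..0xffffffffffffffff = 128TB
	 *   one shadow byte per 8 bytes: 128TB / 8 = 16TB of shadow
	 *
	 *   kasan_mem_to_shadow(0xffffffffffffffffUL)
	 *     = ((0xffffffffffffffff - 0xffff800000000000) >> 3) + 0xffff800000000000
	 *     = 0x00000fffffffffff + 0xffff800000000000
	 *     = 0xffff8fffffffffff	(just below KASAN_SHADOW_END)
	 */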

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>

---

It would be nice not to have a different PAGE_OFFSET with and without CONFIG_KASAN.
We have a big enough hole between vmemmap and the esp fixup stacks.
So how about moving the whole direct mapping, vmalloc and vmemmap 8TB up without
hiding it under CONFIG_KASAN?
---
 arch/x86/Kconfig                     |  1 +
 arch/x86/boot/Makefile               |  2 ++
 arch/x86/boot/compressed/Makefile    |  2 ++
 arch/x86/include/asm/kasan.h         | 20 ++++++++++++
 arch/x86/include/asm/page_64_types.h |  4 +++
 arch/x86/include/asm/pgtable.h       |  7 ++++-
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/kernel/dumpstack.c          |  5 ++-
 arch/x86/kernel/head64.c             |  6 ++++
 arch/x86/kernel/head_64.S            | 16 ++++++++++
 arch/x86/mm/Makefile                 |  3 ++
 arch/x86/mm/init.c                   |  3 ++
 arch/x86/mm/kasan_init_64.c          | 59 ++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile           |  2 +-
 arch/x86/realmode/rm/Makefile        |  1 +
 arch/x86/vdso/Makefile               |  1 +
 include/linux/kasan.h                |  3 ++
 lib/Kconfig.kasan                    |  1 +
 18 files changed, 135 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5b1b180..3b8770e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
 	select HAVE_ACPI_APEI if ACPI
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
+	select HAVE_ARCH_KASAN if X86_64 && !XEN
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index dbe8dd2..9204cc0 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 7a801a3..8e5b9b3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..bff6a1a
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,20 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffff800000000000UL
+#define KASAN_SHADOW_END	0xffff900000000000UL
+
+#ifndef __ASSEMBLY__
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 6782051..ed98909 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -30,7 +30,11 @@
  * hypervisor to fit.  Choosing 16 slots here is arbitrary, but it's
  * what Xen requires.
  */
+#ifdef CONFIG_KASAN
+#define __PAGE_OFFSET           _AC(0xffff900000000000, UL)
+#else
 #define __PAGE_OFFSET           _AC(0xffff880000000000, UL)
+#endif
 
 #define __START_KERNEL_map	_AC(0xffffffff80000000, UL)
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index aa97a07..295263e 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -671,9 +671,14 @@ static inline int pgd_none(pgd_t pgd)
  */
 #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
 
-
+#ifndef CONFIG_KASAN
 #define KERNEL_PGD_BOUNDARY	pgd_index(PAGE_OFFSET)
 #define KERNEL_PGD_PTRS		(PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)
+#else
+#include <asm/kasan.h>
+#define KERNEL_PGD_BOUNDARY	pgd_index(KASAN_SHADOW_START)
+#define KERNEL_PGD_PTRS		(PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)
+#endif
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index ada2e2d..4c59d7f 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..9d97e3a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -158,6 +159,9 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+	write_cr3(__pa(early_level4_pgt));
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +183,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..6be3af7 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,6 +514,22 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..ef017a7 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -4,6 +4,7 @@
 #include <linux/swap.h>
 #include <linux/memblock.h>
 #include <linux/bootmem.h>	/* for max_low_pfn */
+#include <linux/kasan.h>
 
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..1efda37
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,59 @@
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/sched.h>
+#include <linux/kasan.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+static int __init map_range(struct range *range)
+{
+	int ret;
+	unsigned long start = kasan_mem_to_shadow(pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(pfn_to_kaddr(range->end));
+
+	ret = vmemmap_populate(start, end, NUMA_NO_NODE);
+
+	return ret;
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	memcpy(early_level4_pgt, init_level4_pgt, 4096);
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(0xffffc80000000000UL));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6055f64..f957ee9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -29,6 +29,7 @@ static inline void kasan_disable_local(void)
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size);
+void kasan_map_shadow(void);
 
 #else /* CONFIG_KASAN */
 
@@ -37,6 +38,8 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_map_shadow(void) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 22fec2d..156d3e6 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: dynamic memory error detector"
+	depends on !MEMORY_HOTPLUG
 	default n
 	help
 	  Enables address sanitizer - dynamic memory error detector,
-- 
1.8.5.5
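
For a rough sanity check of the zero-shadow bootstrap above (a stand-alone
userspace sketch, not part of the patch): the shadow window reserved by this
patch is 16 TB (0xffff800000000000 - 0xffff900000000000), and each PGD entry
covers 512 GB, so kasan_map_zero_shadow() fills 32 PGD slots, all pointing at
the same read-only zero_pud -> zero_pmd -> zero_pte -> empty_zero_page chain.

#include <stdio.h>

int main(void)
{
	unsigned long long start = 0xffff800000000000ULL; /* KASAN_SHADOW_START */
	unsigned long long end   = 0xffff900000000000ULL; /* KASAN_SHADOW_END   */
	unsigned long long pgd   = 1ULL << 39;            /* 512 GB per PGD entry */

	printf("shadow window: %llu TB\n", (end - start) >> 40); /* 16 */
	printf("pgd entries:   %llu\n", (end - start) / pgd);    /* 32 */
	return 0;
}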



* [RFC/PATCH v2 03/10] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region, and to mark freed
pages as inaccessible.
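
As a minimal illustration (a hypothetical test sketch, not part of this
patch), with these hooks in place a page-level use-after-free like the
following becomes detectable:

#include <linux/gfp.h>
#include <linux/mm.h>

static noinline void kasan_page_uaf_demo(void)
{
	struct page *page = alloc_pages(GFP_KERNEL, 0);
	char *ptr;

	if (!page)
		return;

	ptr = page_address(page);
	__free_pages(page, 0);			/* kasan_free_pages() poisons the shadow */
	((volatile char *)ptr)[0] = 'x';	/* reported as use-after-free */
}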

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f957ee9..c5ae971 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -31,6 +31,9 @@ static inline void kasan_disable_local(void)
 void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_map_shadow(void);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -40,6 +43,9 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_map_shadow(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 7d9d92e..a8c5d6d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 65f8145..ed4e925 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -109,6 +109,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report_error(&info);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2ea2ed7..227e9c6 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 3bfc8b6..94d79e7 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -78,6 +81,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3935c9a..63c55c9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -61,6 +61,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -753,6 +754,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -932,6 +934,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
1.8.5.5



* [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function.
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

virt_to_obj takes the kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.
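
As a quick worked example of the arithmetic (a stand-alone sketch with
made-up addresses): with a 256-byte object size, an address 0x128 bytes
into the slab page lies 0x128 % 256 = 40 bytes into its object, so the
object starts at page offset 0x100.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

int main(void)
{
	uintptr_t slab_page = 0xffff880001000000UL;	/* hypothetical slab page */
	uintptr_t x         = 0xffff880001000128UL;	/* pointer inside an object */
	size_t size         = 256;			/* s->size */
	uintptr_t object    = x - ((x - slab_page) % size);

	assert(object == 0xffff880001000100UL);
	return 0;
}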

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 026e7c3..3e3a6ae 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -346,4 +346,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
 void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
+
 #endif /* MM_SLAB_H */
-- 
1.8.5.5



* [RFC/PATCH v2 05/10] mm: slub: share slab_err and object_err functions
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

Remove the static qualifier from slab_err() and object_err() and add their
declarations to mm/slab.h so they can be used by the kernel address
sanitizer.
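
For context, a sketch of how the exported helpers end up being called from
the KASan report path later in this series (report_slab_access() is a
made-up wrapper name; the real call sites live in mm/kasan/report.c, which
includes "../slab.h"):

static void report_slab_access(struct kmem_cache *cache, struct page *page,
			       unsigned long addr)
{
	void *object = virt_to_obj(cache, page_address(page), (void *)addr);

	object_err(cache, page, object, "kasan error");
}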

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab.h | 5 +++++
 mm/slub.c | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 3e3a6ae..87491dd 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -345,6 +345,11 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
 
 void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
+void slab_err(struct kmem_cache *s, struct page *page,
+		const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 
 static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 {
diff --git a/mm/slub.c b/mm/slub.c
index fa86e58..c4158b2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -639,14 +639,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
1.8.5.5



* [RFC/PATCH v2 06/10] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
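
For reference, the KASan patch later in this series fills these hooks in as
follows, so that SLUB's own reads of redzones and poison bytes do not get
reported by the sanitizer:

static inline void metadata_access_enable(void)
{
	kasan_disable_local();
}

static inline void metadata_access_disable(void)
{
	kasan_enable_local();
}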

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index c4158b2..f3603d2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -477,13 +477,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -513,7 +523,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -687,7 +699,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -780,7 +794,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
1.8.5.5



* [RFC/PATCH v2 07/10] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested
by the caller is marked as accessible, and the rest of the object
(including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize was called for it.
There are some places in the kernel where ksize is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it has to be marked as accessible.

Code in slub.c and slab_common.c may validly access objects' metadata,
so instrumentation for these files is disabled.
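
A worked example of the resulting shadow encoding (a stand-alone userspace
sketch with made-up sizes, mirroring what kasan_kmalloc() below does for a
kmalloc(42) request served from a cache with a 64-byte object size):

#include <stdio.h>
#include <stddef.h>

#define SHADOW_SCALE	8	/* KASAN_SHADOW_SCALE_SIZE */
#define KMALLOC_REDZONE	0xFC	/* KASAN_KMALLOC_REDZONE */

int main(void)
{
	unsigned char shadow[8];	/* one shadow byte per 8 object bytes */
	size_t size = 42, object_size = 64, i;

	for (i = 0; i < object_size / SHADOW_SCALE; i++) {
		size_t off = i * SHADOW_SCALE;

		if (off + SHADOW_SCALE <= size)
			shadow[i] = 0;			/* all 8 bytes accessible */
		else if (off < size)
			shadow[i] = size - off;		/* only the first bytes are valid */
		else
			shadow[i] = KMALLOC_REDZONE;	/* object redzone */
	}

	for (i = 0; i < object_size / SHADOW_SCALE; i++)
		printf("%02x ", shadow[i]);	/* prints: 00 00 00 00 00 02 fc fc */
	printf("\n");
	return 0;
}

Any access at object offset 42 or above then hits a non-zero shadow byte and
is reported as out-of-bounds; freeing the object repoisons it with
KASAN_KMALLOC_FREE (0xFB), turning later accesses into use-after-free reports.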

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  24 ++++++++++++
 include/linux/slab.h  |  11 +++++-
 lib/Kconfig.kasan     |   1 +
 mm/Makefile           |   3 ++
 mm/kasan/kasan.c      | 101 +++++++++++++++++++++++++++++++++++++++++++++++++-
 mm/kasan/kasan.h      |   5 +++
 mm/kasan/report.c     |  26 ++++++++++++-
 mm/slab_common.c      |   5 ++-
 mm/slub.c             |  36 ++++++++++++++++--
 9 files changed, 203 insertions(+), 9 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c5ae971..728c046 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -33,6 +33,17 @@ void kasan_map_shadow(void);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_free_slab_pages(struct page *page, int order);
 
 #else /* CONFIG_KASAN */
 
@@ -45,6 +56,19 @@ static inline void kasan_map_shadow(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 156d3e6..69ea0d0 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: dynamic memory error detector"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	default n
 	help
 	  Enables address sanitizer - dynamic memory error detector,
diff --git a/mm/Makefile b/mm/Makefile
index b3c8b77..ff191dd5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index ed4e925..cf4feb3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -119,8 +120,104 @@ void kasan_free_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison_shadow(page_address(page),
-				PAGE_SIZE << order,
-				KASAN_FREE_PAGE);
+				PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
+void kasan_free_slab_pages(struct page *page, int order)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_end = round_up(object_end, PAGE_SIZE);
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	size_t size = padding_end - padding_start;
+
+	if (size)
+		kasan_poison_shadow((void *)padding_start,
+				size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc_large);
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
 }
 
 void __asan_load1(unsigned long addr)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 227e9c6..1dd8ec7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,11 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 94d79e7..34ba46d 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,15 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -74,14 +80,32 @@ static void print_address_description(struct access_info *info)
 {
 	void *object;
 	struct kmem_cache *cache;
-	void *slab_start;
 	struct page *page;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
 		dump_page(page, "kasan error");
 		dump_stack();
 		break;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d7d8ffd..4a4dd59 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -636,6 +636,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -810,8 +811,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index f3603d2..8f5bb71 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -479,10 +480,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1252,11 +1255,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1274,11 +1279,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1391,8 +1398,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1426,8 +1436,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p);
+		}
 	}
 
 	page->freelist = start;
@@ -1452,6 +1464,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	}
 
 	kmemcheck_free_shadow(page, compound_order(page));
+	kasan_free_slab_pages(page, compound_order(page));
 
 	mod_zone_page_state(page_zone(page),
 		(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2486,6 +2499,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2512,6 +2526,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2901,6 +2917,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3281,6 +3298,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3324,12 +3343,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3345,6 +3366,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
1.8.5.5



* [RFC/PATCH v2 07/10] mm: slub: add kernel address sanitizer support for slub allocator
@ 2014-09-10 14:31     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested
by the caller is marked as accessible, and the rest of the object
(including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize was called for it.
There are some places in the kernel where ksize is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it has to be marked as accessible.

Code in slub.c and slab_common.c may validly access objects' metadata,
so instrumentation for these files is disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  24 ++++++++++++
 include/linux/slab.h  |  11 +++++-
 lib/Kconfig.kasan     |   1 +
 mm/Makefile           |   3 ++
 mm/kasan/kasan.c      | 101 +++++++++++++++++++++++++++++++++++++++++++++++++-
 mm/kasan/kasan.h      |   5 +++
 mm/kasan/report.c     |  26 ++++++++++++-
 mm/slab_common.c      |   5 ++-
 mm/slub.c             |  36 ++++++++++++++++--
 9 files changed, 203 insertions(+), 9 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c5ae971..728c046 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -33,6 +33,17 @@ void kasan_map_shadow(void);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_free_slab_pages(struct page *page, int order);
 
 #else /* CONFIG_KASAN */
 
@@ -45,6 +56,19 @@ static inline void kasan_map_shadow(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 156d3e6..69ea0d0 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: dynamic memory error detector"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	default n
 	help
 	  Enables address sanitizer - dynamic memory error detector,
diff --git a/mm/Makefile b/mm/Makefile
index b3c8b77..ff191dd5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index ed4e925..cf4feb3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -119,8 +120,104 @@ void kasan_free_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison_shadow(page_address(page),
-				PAGE_SIZE << order,
-				KASAN_FREE_PAGE);
+				PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
+void kasan_free_slab_pages(struct page *page, int order)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_end = round_up(object_end, PAGE_SIZE);
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	size_t size = padding_end - padding_start;
+
+	if (size)
+		kasan_poison_shadow((void *)padding_start,
+				size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc_large);
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
 }
 
 void __asan_load1(unsigned long addr)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 227e9c6..1dd8ec7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,11 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 94d79e7..34ba46d 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,15 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -74,14 +80,32 @@ static void print_address_description(struct access_info *info)
 {
 	void *object;
 	struct kmem_cache *cache;
-	void *slab_start;
 	struct page *page;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
 		dump_page(page, "kasan error");
 		dump_stack();
 		break;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d7d8ffd..4a4dd59 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -636,6 +636,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -810,8 +811,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index f3603d2..8f5bb71 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -479,10 +480,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1252,11 +1255,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1274,11 +1279,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1391,8 +1398,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1426,8 +1436,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p);
+		}
 	}
 
 	page->freelist = start;
@@ -1452,6 +1464,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	}
 
 	kmemcheck_free_shadow(page, compound_order(page));
+	kasan_free_slab_pages(page, compound_order(page));
 
 	mod_zone_page_state(page_zone(page),
 		(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2486,6 +2499,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2512,6 +2526,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2901,6 +2917,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3281,6 +3298,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3324,12 +3343,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3345,6 +3366,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	 * so we need to unpoison it. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 08/10] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Alexander Viro

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that dname is allocated
using kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about
such accesses.
To avoid such reports we mark the rounded up allocation size as
accessible in the shadow.
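
A minimal sketch of the resulting pattern (the helper and its context
are made up, and the CONFIG_DCACHE_WORD_ACCESS guard is omitted;
kasan_unpoison_shadow() and round_up() are used as in the hunk below):

	static char *copy_name(const char *src, unsigned int len, gfp_t gfp)
	{
		char *name = kmalloc(len + 1, gfp);

		if (!name)
			return NULL;
		memcpy(name, src, len);
		name[len] = '\0';
		/* word-at-a-time readers may touch up to the next word boundary */
		kasan_unpoison_shadow(name, round_up(len + 1, sizeof(unsigned long)));
		return name;
	}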

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 35e61f6..b83dbc1 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1396,6 +1397,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 08/10] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-09-10 14:31     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Alexander Viro

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that dname is allocated
using kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about
such accesses.
To avoid such reports we mark the rounded up allocation size as
accessible in the shadow.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 35e61f6..b83dbc1 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1396,6 +1397,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 09/10] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Catalin Marinas

kmemleak scans all memory while searching for pointers to objects,
so the scan_block() function may access kasan's shadow memory region
while searching for pointers.

Also, kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded up size as the size of the object. This makes kasan
complain during the calculation of the object's checksum. The
simplest solution here is to disable kasan checks around these accesses.
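
The same annotation pattern as a standalone sketch (the helper is made
up; kasan_disable_local()/kasan_enable_local() are the helpers used
elsewhere in this series):

	static unsigned long read_word_nocheck(const unsigned long *p)
	{
		unsigned long val;

		kasan_disable_local();	/* the following access is known to be safe */
		val = *p;
		kasan_enable_local();

		return val;
	}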

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 09/10] kmemleak: disable kasan instrumentation for kmemleak
@ 2014-09-10 14:31     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Catalin Marinas

kmemleak scans all memory while searching for pointers to objects,
so the scan_block() function may access kasan's shadow memory region
while searching for pointers.

Also, kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded up size as the size of the object. This makes kasan
complain during the calculation of the object's checksum. The
simplest solution here is to disable kasan checks around these accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 10/10] lib: add kasan test module
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 14:31     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).
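
New cases would slot in next to the existing ones; as an illustration
only (this function is not part of the patch), an out-of-bounds in
memcpy test could follow the same pattern as the tests below:

	void __init kmalloc_oob_memcpy(void)
	{
		char *src, *dst;
		size_t size = 64;

		pr_info("out-of-bounds in memcpy\n");
		src = kmalloc(size, GFP_KERNEL);
		dst = kmalloc(size, GFP_KERNEL);
		if (!src || !dst) {
			pr_err("Allocation failed\n");
			kfree(src);
			kfree(dst);
			return;
		}

		memcpy(dst, src, size + 4);	/* 4 bytes past the end of both objects */
		kfree(src);
		kfree(dst);
	}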

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.debug |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 09824b5..d3190bb 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -633,6 +633,14 @@ config DEBUG_STACKOVERFLOW
 
 	  If in doubt, say "N".
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing various nasty things like
+	  out-of-bounds accesses and use-after-free. It is useful for testing
+	  kernel debugging features like the kernel address sanitizer.
+
 source "lib/Kconfig.kmemcheck"
 
 source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index b73c3c3..4da59a9 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..e448d4e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size + 5);
+	kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC/PATCH v2 10/10] lib: add kasan test module
@ 2014-09-10 14:31     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.debug |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 09824b5..d3190bb 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -633,6 +633,14 @@ config DEBUG_STACKOVERFLOW
 
 	  If in doubt, say "N".
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing various nasty things like
+	  out-of-bounds accesses and use-after-free. It is useful for testing
+	  kernel debugging features like the kernel address sanitizer.
+
 source "lib/Kconfig.kmemcheck"
 
 source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index b73c3c3..4da59a9 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..e448d4e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size + 5);
+	kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
1.8.5.5


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
  2014-09-10 15:01     ` Dave Hansen
@ 2014-09-10 14:58       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:58 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 07:01 PM, Dave Hansen wrote:
> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>> This is a second iteration of kerenel address sanitizer (KASan).
> 
> Could you give a summary of what you've changed since the last version?
> 

I did give one; grep for "Changes since v1:".

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
@ 2014-09-10 14:58       ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 14:58 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 07:01 PM, Dave Hansen wrote:
> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>> This is a second iteration of kerenel address sanitizer (KASan).
> 
> Could you give a summary of what you've changed since the last version?
> 

I did give one; grep for "Changes since v1:".


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 15:01     ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-09-10 15:01 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
> This is a second iteration of kerenel address sanitizer (KASan).

Could you give a summary of what you've changed since the last version?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
@ 2014-09-10 15:01     ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-09-10 15:01 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
> This is a second iteration of kerenel address sanitizer (KASan).

Could you give a summary of what you've changed since the last version?


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
  2014-09-10 14:31   ` Andrey Ryabinin
@ 2014-09-10 15:12     ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-10 15:12 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 10:31 AM, Andrey Ryabinin wrote:
> Hi,
> This is a second iteration of kerenel address sanitizer (KASan).

FWIW, I've been using v1 for a while and it has uncovered quite a few
real bugs across the kernel.

Some of them (I didn't go beyond the first page on google):

* https://lkml.org/lkml/2014/8/9/162 - Which resulted in major changes to
ballooning.
* https://lkml.org/lkml/2014/7/13/192
* https://lkml.org/lkml/2014/7/24/359


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector.
@ 2014-09-10 15:12     ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-10 15:12 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Catalin Marinas

On 09/10/2014 10:31 AM, Andrey Ryabinin wrote:
> Hi,
> This is a second iteration of kerenel address sanitizer (KASan).

FWIW, I've been using v1 for a while and it has uncovered quite a few
real bugs across the kernel.

Some of them (I didn't go beyond the first page on google):

* https://lkml.org/lkml/2014/8/9/162 - Which resulted in major changes to
ballooning.
* https://lkml.org/lkml/2014/7/13/192
* https://lkml.org/lkml/2014/7/24/359


Thanks,
Sasha


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-10 15:46       ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-09-10 15:46 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

Overall, the approach here looks pretty sane.  As you noted, it would be
nice to keep PAGE_OFFSET in one place, but it's not a deal breaker for
me.  The use of the vmemmap code looks to be a nice fit.

Few nits below.

On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
> 16TB of virtual addressed used for shadow memory.
> It's located in range [0xffff800000000000 - 0xffff900000000000]
> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
> to 0xffff900000000000.
...
> It would be nice to not have different PAGE_OFFSET with and without CONFIG_KASAN.
> We have big enough hole between vmemmap and esp fixup stacks.
> So how about moving all direct mapping, vmalloc and vmemmap 8TB up without
> hiding it under CONFIG_KASAN?

Is there a reason this has to be _below_ the linear map?  Couldn't we
just carve some space out of the vmalloc() area for the kasan area?


>  arch/x86/Kconfig                     |  1 +
>  arch/x86/boot/Makefile               |  2 ++
>  arch/x86/boot/compressed/Makefile    |  2 ++
>  arch/x86/include/asm/kasan.h         | 20 ++++++++++++
>  arch/x86/include/asm/page_64_types.h |  4 +++
>  arch/x86/include/asm/pgtable.h       |  7 ++++-
>  arch/x86/kernel/Makefile             |  2 ++
>  arch/x86/kernel/dumpstack.c          |  5 ++-
>  arch/x86/kernel/head64.c             |  6 ++++
>  arch/x86/kernel/head_64.S            | 16 ++++++++++
>  arch/x86/mm/Makefile                 |  3 ++
>  arch/x86/mm/init.c                   |  3 ++
>  arch/x86/mm/kasan_init_64.c          | 59 ++++++++++++++++++++++++++++++++++++
>  arch/x86/realmode/Makefile           |  2 +-
>  arch/x86/realmode/rm/Makefile        |  1 +
>  arch/x86/vdso/Makefile               |  1 +
>  include/linux/kasan.h                |  3 ++
>  lib/Kconfig.kasan                    |  1 +
>  18 files changed, 135 insertions(+), 3 deletions(-)
>  create mode 100644 arch/x86/include/asm/kasan.h
>  create mode 100644 arch/x86/mm/kasan_init_64.c

This probably deserves an update of Documentation/x86/x86_64/mm.txt, too.

> +void __init kasan_map_shadow(void)
> +{
> +	int i;
> +
> +	memcpy(early_level4_pgt, init_level4_pgt, 4096);
> +	load_cr3(early_level4_pgt);
> +
> +	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
> +				kasan_mem_to_shadow(0xffffc80000000000UL));

This 0xffffc80000000000UL could be PAGE_OFFSET+MAXMEM.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
@ 2014-09-10 15:46       ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-09-10 15:46 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

Overall, the approach here looks pretty sane.  As you noted, it would be
nice to keep PAGE_OFFSET in one place, but it's not a deal breaker for
me.  The use of the vmemmap code looks to be a nice fit.

Few nits below.

On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
> 16TB of virtual addressed used for shadow memory.
> It's located in range [0xffff800000000000 - 0xffff900000000000]
> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
> to 0xffff900000000000.
...
> It would be nice to not have different PAGE_OFFSET with and without CONFIG_KASAN.
> We have big enough hole between vmemmap and esp fixup stacks.
> So how about moving all direct mapping, vmalloc and vmemmap 8TB up without
> hiding it under CONFIG_KASAN?

Is there a reason this has to be _below_ the linear map?  Couldn't we
just carve some space out of the vmalloc() area for the kasan area?


>  arch/x86/Kconfig                     |  1 +
>  arch/x86/boot/Makefile               |  2 ++
>  arch/x86/boot/compressed/Makefile    |  2 ++
>  arch/x86/include/asm/kasan.h         | 20 ++++++++++++
>  arch/x86/include/asm/page_64_types.h |  4 +++
>  arch/x86/include/asm/pgtable.h       |  7 ++++-
>  arch/x86/kernel/Makefile             |  2 ++
>  arch/x86/kernel/dumpstack.c          |  5 ++-
>  arch/x86/kernel/head64.c             |  6 ++++
>  arch/x86/kernel/head_64.S            | 16 ++++++++++
>  arch/x86/mm/Makefile                 |  3 ++
>  arch/x86/mm/init.c                   |  3 ++
>  arch/x86/mm/kasan_init_64.c          | 59 ++++++++++++++++++++++++++++++++++++
>  arch/x86/realmode/Makefile           |  2 +-
>  arch/x86/realmode/rm/Makefile        |  1 +
>  arch/x86/vdso/Makefile               |  1 +
>  include/linux/kasan.h                |  3 ++
>  lib/Kconfig.kasan                    |  1 +
>  18 files changed, 135 insertions(+), 3 deletions(-)
>  create mode 100644 arch/x86/include/asm/kasan.h
>  create mode 100644 arch/x86/mm/kasan_init_64.c

This probably deserves an update of Documentation/x86/x86_64/mm.txt, too.

> +void __init kasan_map_shadow(void)
> +{
> +	int i;
> +
> +	memcpy(early_level4_pgt, init_level4_pgt, 4096);
> +	load_cr3(early_level4_pgt);
> +
> +	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
> +				kasan_mem_to_shadow(0xffffc80000000000UL));

This 0xffffc80000000000UL could be PAGE_OFFSET+MAXMEM.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function.
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-10 16:16       ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-09-10 16:16 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Pekka Enberg, David Rientjes

On Wed, 10 Sep 2014, Andrey Ryabinin wrote:

> virt_to_obj takes kmem_cache address, address of slab page,
> address x pointing somewhere inside slab object,
> and returns address of the begging of object.

This function is SLUB specific. Does it really need to be in slab.h?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function.
@ 2014-09-10 16:16       ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-09-10 16:16 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Pekka Enberg, David Rientjes

On Wed, 10 Sep 2014, Andrey Ryabinin wrote:

> virt_to_obj takes kmem_cache address, address of slab page,
> address x pointing somewhere inside slab object,
> and returns address of the begging of object.

This function is SLUB specific. Does it really need to be in slab.h?


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 15:46       ` Dave Hansen
@ 2014-09-10 20:30         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 20:30 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, x86, linux-mm, Thomas Gleixner, Ingo Molnar

2014-09-10 19:46 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> Overall, the approach here looks pretty sane.  As you noted, it would be
> nice to keep PAGE_OFFSET in one place, but it's not a deal breaker for
> me.  The use of the vmemmap code looks to be a nice fit.
>
> Few nits below.
>
> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>> 16TB of virtual addressed used for shadow memory.
>> It's located in range [0xffff800000000000 - 0xffff900000000000]
>> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
>> to 0xffff900000000000.
> ...
>> It would be nice to not have different PAGE_OFFSET with and without CONFIG_KASAN.
>> We have big enough hole between vmemmap and esp fixup stacks.
>> So how about moving all direct mapping, vmalloc and vmemmap 8TB up without
>> hiding it under CONFIG_KASAN?
>
> Is there a reason this has to be _below_ the linear map?  Couldn't we
> just carve some space out of the vmalloc() area for the kasan area?
>

Yes, there is a reason for this. For inline instrumentation we need to
catch accesses to userspace without any additional check.
This means that we would need a shadow of 1 << 61 bytes, and we don't
have that many addresses available. However, we could use the hole
between userspace and kernelspace for that. For any address in
[0 - 0xffff800000000000], the shadow address will land in this hole,
so checking the shadow value will produce a general protection fault
(GPF). We may even try to handle the GPF in a special way and print a
more user-friendly report (this will be under CONFIG_KASAN, of course).

But now I realize that even if we put the shadow in the vmalloc area,
shadow addresses corresponding to userspace addresses will still fall
between userspace and kernelspace, so we will also get a GPF.
The only problem I see with such an approach: suppose that, because of
some bug in the kernel, we try to access memory slightly below
0xffff800000000000. In that case kasan will try to check something
which is in fact not a shadow byte at all. It's not a big deal though,
the kernel will crash anyway. It only means that debugging such
problems could be a little more complex than without kasan.
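
A quick userspace sketch of that arithmetic (constants from the layout
in this patch; the mapping is assumed to be the usual scale-by-8 plus
an offset chosen so that PAGE_OFFSET maps to the start of the shadow
region):

	#include <stdio.h>

	#define SHADOW_START	0xffff800000000000UL	/* start of kasan shadow */
	#define PAGE_OFFSET	0xffff900000000000UL	/* with CONFIG_KASAN */

	static unsigned long mem_to_shadow(unsigned long addr)
	{
		return (addr >> 3) + SHADOW_START - (PAGE_OFFSET >> 3);
	}

	int main(void)
	{
		unsigned long user = 0x00007fffffffe000UL;	/* typical userspace address */

		/* prints an address inside the non-canonical hole, so the
		 * shadow check itself faults with a GPF, as described above */
		printf("shadow of %#lx is %#lx\n", user, mem_to_shadow(user));
		return 0;
	}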



>
>>  arch/x86/Kconfig                     |  1 +
>>  arch/x86/boot/Makefile               |  2 ++
>>  arch/x86/boot/compressed/Makefile    |  2 ++
>>  arch/x86/include/asm/kasan.h         | 20 ++++++++++++
>>  arch/x86/include/asm/page_64_types.h |  4 +++
>>  arch/x86/include/asm/pgtable.h       |  7 ++++-
>>  arch/x86/kernel/Makefile             |  2 ++
>>  arch/x86/kernel/dumpstack.c          |  5 ++-
>>  arch/x86/kernel/head64.c             |  6 ++++
>>  arch/x86/kernel/head_64.S            | 16 ++++++++++
>>  arch/x86/mm/Makefile                 |  3 ++
>>  arch/x86/mm/init.c                   |  3 ++
>>  arch/x86/mm/kasan_init_64.c          | 59 ++++++++++++++++++++++++++++++++++++
>>  arch/x86/realmode/Makefile           |  2 +-
>>  arch/x86/realmode/rm/Makefile        |  1 +
>>  arch/x86/vdso/Makefile               |  1 +
>>  include/linux/kasan.h                |  3 ++
>>  lib/Kconfig.kasan                    |  1 +
>>  18 files changed, 135 insertions(+), 3 deletions(-)
>>  create mode 100644 arch/x86/include/asm/kasan.h
>>  create mode 100644 arch/x86/mm/kasan_init_64.c
>
> This probably deserves an update of Documentation/x86/x86_64/mm.txt, too.
>

Sure, I didn't bother to do it yet, since the memory layout changes in
this patch may not be final.

>> +void __init kasan_map_shadow(void)
>> +{
>> +     int i;
>> +
>> +     memcpy(early_level4_pgt, init_level4_pgt, 4096);
>> +     load_cr3(early_level4_pgt);
>> +
>> +     clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
>> +                             kasan_mem_to_shadow(0xffffc80000000000UL));
>
> This 0xffffc80000000000UL could be PAGE_OFFSET+MAXMEM.
>
>
>

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function.
  2014-09-10 16:16       ` Christoph Lameter
@ 2014-09-10 20:32         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 20:32 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Pekka Enberg, David Rientjes

2014-09-10 20:16 GMT+04:00 Christoph Lameter <cl@linux.com>:
> On Wed, 10 Sep 2014, Andrey Ryabinin wrote:
>
>> virt_to_obj takes kmem_cache address, address of slab page,
>> address x pointing somewhere inside slab object,
>> and returns address of the begging of object.
>
> This function is SLUB specific. Does it really need to be in slab.h?
>

Oh, yes this should be in slub.c
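
For context, the helper being discussed is essentially just pointer
arithmetic on the object size. A minimal sketch (SLUB field names assumed
here, not a verbatim copy of the patch):

	/*
	 * Round an arbitrary pointer x that points somewhere inside a
	 * slab object down to the start of that object.
	 */
	static inline void *virt_to_obj(struct kmem_cache *s,
					void *slab_page, void *x)
	{
		return x - ((x - slab_page) % s->size);
	}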

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 10/10] lib: add kasan test module
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-10 20:38       ` Dave Jones
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Jones @ 2014-09-10 20:38 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm

On Wed, Sep 10, 2014 at 06:31:27PM +0400, Andrey Ryabinin wrote:
 > This is a test module doing varios nasty things like
 > out of bounds accesses, use after free. It is usefull for testing
 > kernel debugging features like kernel address sanitizer.
 
 > +void __init kmalloc_oob_rigth(void)
 > +{

'right' ?

	Dave

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 10/10] lib: add kasan test module
  2014-09-10 20:38       ` Dave Jones
@ 2014-09-10 20:46         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 20:46 UTC (permalink / raw)
  To: Dave Jones, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm

2014-09-11 0:38 GMT+04:00 Dave Jones <davej@redhat.com>:
> On Wed, Sep 10, 2014 at 06:31:27PM +0400, Andrey Ryabinin wrote:
>  > This is a test module doing varios nasty things like
>  > out of bounds accesses, use after free. It is usefull for testing
>  > kernel debugging features like kernel address sanitizer.
>
>  > +void __init kmalloc_oob_rigth(void)
>  > +{
>
> 'right' ?
>
>

I meant the right side here (opposite of left), not a synonym of the
word 'correct'.

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 10/10] lib: add kasan test module
  2014-09-10 20:46         ` Andrey Ryabinin
@ 2014-09-10 20:47           ` Dave Jones
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Jones @ 2014-09-10 20:47 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm

On Thu, Sep 11, 2014 at 12:46:04AM +0400, Andrey Ryabinin wrote:
 > 2014-09-11 0:38 GMT+04:00 Dave Jones <davej@redhat.com>:
 > > On Wed, Sep 10, 2014 at 06:31:27PM +0400, Andrey Ryabinin wrote:
 > >  > This is a test module doing varios nasty things like
 > >  > out of bounds accesses, use after free. It is usefull for testing
 > >  > kernel debugging features like kernel address sanitizer.
 > >
 > >  > +void __init kmalloc_oob_rigth(void)
 > >  > +{
 > >
 > > 'right' ?
 > >
 > >
 > 
 > I meant the right side here (opposite of left), not a synonym of the
 > word 'correct'.

yes, but there's a typo.

	Dave

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 10/10] lib: add kasan test module
  2014-09-10 20:47           ` Dave Jones
@ 2014-09-10 20:50             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-10 20:50 UTC (permalink / raw)
  To: Dave Jones, Andrey Ryabinin, Andrey Ryabinin, LKML,
	Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm

2014-09-11 0:47 GMT+04:00 Dave Jones <davej@redhat.com>:
> On Thu, Sep 11, 2014 at 12:46:04AM +0400, Andrey Ryabinin wrote:
>  > 2014-09-11 0:38 GMT+04:00 Dave Jones <davej@redhat.com>:
>  > > On Wed, Sep 10, 2014 at 06:31:27PM +0400, Andrey Ryabinin wrote:
>  > >  > This is a test module doing varios nasty things like
>  > >  > out of bounds accesses, use after free. It is usefull for testing
>  > >  > kernel debugging features like kernel address sanitizer.
>  > >
>  > >  > +void __init kmalloc_oob_rigth(void)
>  > >  > +{
>  > >
>  > > 'right' ?
>  > >
>  > >
>  >
>  > I meant the right side here (opposite of left), not a synonym of the
>  > word 'correct'.
>
> yes, but there's a typo.
>
>         Dave

Yeah, I see now, thanks.

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 20:30         ` Andrey Ryabinin
@ 2014-09-10 22:45           ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-09-10 22:45 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, x86, linux-mm, Thomas Gleixner, Ingo Molnar

On 09/10/2014 01:30 PM, Andrey Ryabinin wrote:
> Yes, there is a reason for this. For inline instrumentation we need to
> catch access to userspace without any additional check.
> This means that we need shadow of 1 << 61 bytes and we don't have so
> many addresses available.

That sounds reasonable.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-11  3:55       ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-11  3:55 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Michal Marek, Ingo Molnar, Peter Zijlstra

On 09/10/2014 10:31 AM, Andrey Ryabinin wrote:
> +ifdef CONFIG_KASAN
> +  ifeq ($(call cc-option, $(CFLAGS_KASAN)),)
> +    $(warning Cannot use CONFIG_KASAN: \
> +	      -fsanitize=kernel-address not supported by compiler)
> +  endif
> +endif

This seems to always indicate that my gcc doesn't support
-fsanitize=kernel-address:

Makefile:769: Cannot use CONFIG_KASAN: -fsanitize=kernel-address not supported by compiler

Even though:

$ gcc --version
gcc (GCC) 5.0.0 20140904 (experimental)
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ cat test.c
#include <stdio.h>
#include <sys/mman.h>

void __asan_init_v3(void) { }

int main(int argc, char *argv[])
{
        return 0;
}
$ gcc -fsanitize=kernel-address test.c
$ ./a.out
$


Thanks,
Sasha
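
For reference, cc-option does not simply invoke gcc the way the test above
does: it test-compiles an empty translation unit with the candidate flag
appended to the kernel's own compiler flags, roughly like this (paraphrased
sketch of scripts/Kbuild.include; the exact definition varies by kernel
version):

	cc-option = $(call try-run,\
		$(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))

Note also that if $(CFLAGS_KASAN) is still empty at the point the ifeq
above is evaluated (for instance because it is only defined later, in
scripts/Makefile.lib), cc-option returns an empty string and the warning
fires regardless of what the compiler supports.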

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-11  4:01       ` H. Peter Anvin
  -1 siblings, 0 replies; 862+ messages in thread
From: H. Peter Anvin @ 2014-09-11  4:01 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
> This patch add arch specific code for kernel address sanitizer.
> 
> 16TB of virtual addressed used for shadow memory.
> It's located in range [0xffff800000000000 - 0xffff900000000000]
> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
> to 0xffff900000000000.

NAK on this.

0xffff880000000000 is the lowest usable address because we have agreed
to leave 0xffff800000000000-0xffff880000000000 for the hypervisor or
other non-OS uses.

Bumping PAGE_OFFSET seems needlessly messy, why not just designate a
zone higher up in memory?

	-hpa
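
(For reference, the relevant part of the x86_64 layout being defended here
is approximately the following, per Documentation/x86/x86_64/mm.txt of
that era:

	ffff800000000000 - ffff87ffffffffff   guard hole, reserved for hypervisor
	ffff880000000000 - ffffc7ffffffffff   direct mapping of all phys. memory (64 TB)
	ffffc80000000000 - ffffc8ffffffffff   hole
	ffffc90000000000 - ffffe8ffffffffff   vmalloc/ioremap space

This is also why the 0xffffc80000000000 constant mentioned earlier in the
thread equals PAGE_OFFSET + MAXMEM in the unmodified layout:
0xffff880000000000 + 64 TB = 0xffffc80000000000.)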


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-10 22:45           ` Dave Hansen
@ 2014-09-11  4:26             ` H. Peter Anvin
  -1 siblings, 0 replies; 862+ messages in thread
From: H. Peter Anvin @ 2014-09-11  4:26 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrey Ryabinin, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Andi Kleen,
	Vegard Nossum, x86, linux-mm, Thomas Gleixner, Ingo Molnar

Except you just broke PVop kernels.

Sent from my tablet, pardon any formatting problems.

> On Sep 10, 2014, at 15:45, Dave Hansen <dave.hansen@intel.com> wrote:
> 
>> On 09/10/2014 01:30 PM, Andrey Ryabinin wrote:
>> Yes, there is a reason for this. For inline instrumentation we need to
>> catch access to userspace without any additional check.
>> This means that we need shadow of 1 << 61 bytes and we don't have so
>> many addresses available.
> 
> That sounds reasonable.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:26             ` H. Peter Anvin
@ 2014-09-11  4:29               ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-11  4:29 UTC (permalink / raw)
  To: H. Peter Anvin, Dave Hansen
  Cc: Andrey Ryabinin, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Andi Kleen, Vegard Nossum, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar

On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
> Except you just broke PVop kernels.

So is this why v2 refuses to boot on my KVM guest? (was digging
into that before I send a mail out).


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:29               ` Sasha Levin
@ 2014-09-11  4:33                 ` H. Peter Anvin
  -1 siblings, 0 replies; 862+ messages in thread
From: H. Peter Anvin @ 2014-09-11  4:33 UTC (permalink / raw)
  To: Sasha Levin, Dave Hansen
  Cc: Andrey Ryabinin, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Andi Kleen, Vegard Nossum, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar

On 09/10/2014 09:29 PM, Sasha Levin wrote:
> On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
>> Except you just broke PVop kernels.
> 
> So is this why v2 refuses to boot on my KVM guest? (was digging
> into that before I send a mail out).
> 

No, KVM should be fine.  It is Xen PV which ends up as a smoldering crater.

	-hpa



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:33                 ` H. Peter Anvin
@ 2014-09-11  4:46                   ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-09-11  4:46 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Sasha Levin, Dave Hansen, Andrey Ryabinin, Andrey Ryabinin, LKML,
	Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Andi Kleen,
	Vegard Nossum, x86, linux-mm, Thomas Gleixner, Ingo Molnar

On Wed, Sep 10, 2014 at 09:33:11PM -0700, H. Peter Anvin wrote:
> On 09/10/2014 09:29 PM, Sasha Levin wrote:
> > On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
> >> Except you just broke PVop kernels.
> > 
> > So is this why v2 refuses to boot on my KVM guest? (was digging
> > into that before I send a mail out).
> > 
> 
> No, KVM should be fine.  It is Xen PV which ends up as a smoldering crater.

Just exclude it in Kconfig? I assume PV will eventually go away anyways.

-Andi

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:46                   ` Andi Kleen
@ 2014-09-11  4:52                     ` H. Peter Anvin
  -1 siblings, 0 replies; 862+ messages in thread
From: H. Peter Anvin @ 2014-09-11  4:52 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Sasha Levin, Dave Hansen, Andrey Ryabinin, Andrey Ryabinin, LKML,
	Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Vegard Nossum,
	x86, linux-mm, Thomas Gleixner, Ingo Molnar

On 09/10/2014 09:46 PM, Andi Kleen wrote:
> On Wed, Sep 10, 2014 at 09:33:11PM -0700, H. Peter Anvin wrote:
>> On 09/10/2014 09:29 PM, Sasha Levin wrote:
>>> On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
>>>> Except you just broke PVop kernels.
>>>
>>> So is this why v2 refuses to boot on my KVM guest? (was digging
>>> into that before I send a mail out).
>>>
>>
>> No, KVM should be fine.  It is Xen PV which ends up as a smoldering crater.
> 
> Just exclude it in Kconfig? I assume PV will eventually go away anyways.
> 

That would be nice...

	-hpa



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:46                   ` Andi Kleen
@ 2014-09-11  5:25                     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-11  5:25 UTC (permalink / raw)
  To: Andi Kleen, H. Peter Anvin
  Cc: Sasha Levin, Dave Hansen, Andrey Ryabinin, LKML, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Vegard Nossum, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

On 09/11/2014 08:46 AM, Andi Kleen wrote:
> On Wed, Sep 10, 2014 at 09:33:11PM -0700, H. Peter Anvin wrote:
>> On 09/10/2014 09:29 PM, Sasha Levin wrote:
>>> On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
>>>> Except you just broke PVop kernels.
>>>
>>> So is this why v2 refuses to boot on my KVM guest? (was digging
>>> into that before I send a mail out).
>>>
>>
>> No, KVM should be fine.  It is Xen PV which ends up as a smoldering crater.
> 
> Just exclude it in Kconfig? I assume PV will eventually go away anyways.
> 
> -Andi
> 

That's done already in this patch:

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
 	select HAVE_ACPI_APEI if ACPI
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
+	select HAVE_ARCH_KASAN if X86_64 && !XEN


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:01       ` H. Peter Anvin
@ 2014-09-11  5:31         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-11  5:31 UTC (permalink / raw)
  To: H. Peter Anvin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

On 09/11/2014 08:01 AM, H. Peter Anvin wrote:
> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>> This patch add arch specific code for kernel address sanitizer.
>>
>> 16TB of virtual addressed used for shadow memory.
>> It's located in range [0xffff800000000000 - 0xffff900000000000]
>> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
>> to 0xffff900000000000.
> 
> NAK on this.
> 
> 0xffff880000000000 is the lowest usable address because we have agreed
> to leave 0xffff800000000000-0xffff880000000000 for the hypervisor or
> other non-OS uses.
> 
> Bumping PAGE_OFFSET seems needlessly messy, why not just designate a
> zone higher up in memory?
> 

I already answered Dave's question about why I chose to place the shadow below PAGE_OFFSET (answer copied below).
In short - yes, the shadow could be higher, but for some kinds of kernel bugs we could then get confusing oopses in a kasan kernel.

On 09/11/2014 12:30 AM, Andrey Ryabinin wrote:
> 2014-09-10 19:46 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
>>
>> Is there a reason this has to be _below_ the linear map?  Couldn't we
>> just carve some space out of the vmalloc() area for the kasan area?
>>
>
> Yes, there is a reason for this. For inline instrumentation we need to
> catch access to userspace without any additional check.
> This means that we need shadow of 1 << 61 bytes and we don't have so
> many addresses available. However, we could use the
> hole between userspace and kernelspace for that. For any address
> in [0 - 0xffff800000000000], the shadow address will fall
> into this hole, so checking the shadow value will produce a general
> protection fault (GPF). We may even try to handle the GPF in a special
> way and print a more user-friendly report (this will be under CONFIG_KASAN, of course).
>
> But now I realize that even if we put the shadow in vmalloc, shadow
> addresses corresponding to userspace addresses
> will still be between userspace and kernelspace, so we will also get a GPF.
> There is only one problem I see with such an approach. Let's consider
> that, because of some bug in the kernel, we try to access
> memory slightly below 0xffff800000000000. In this case kasan will try
> to check some shadow which is in fact not a shadow byte at all.
> It's not a big deal though, the kernel will crash anyway. It only means
> that debugging such problems could be a little more complex
> than without kasan.
>
>


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  4:29               ` Sasha Levin
@ 2014-09-11 11:51                 ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-11 11:51 UTC (permalink / raw)
  To: Sasha Levin
  Cc: H. Peter Anvin, Dave Hansen, Andrey Ryabinin, LKML,
	Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Andi Kleen,
	Vegard Nossum, x86, linux-mm, Thomas Gleixner, Ingo Molnar

On 09/11/2014 08:29 AM, Sasha Levin wrote:
> On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
>> Except you just broke PVop kernels.
> 
> So is this why v2 refuses to boot on my KVM guest? (was digging
> into that before I send a mail out).
> 

Maybe this will help?


From: Andrey Ryabinin <a.ryabinin@samsung.com>
Subject: [PATCH] x86_64: kasan: fix kernel boot with CONFIG_DEBUG_VIRTUAL=y

Use __pa_nodebug instead of __pa before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa may result in an __asan_load
call before the shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/head64.c    | 6 +++---
 arch/x86/mm/kasan_init_64.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 9d97e3a..5669a8b 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -47,7 +47,7 @@ static void __init reset_early_page_tables(void)

 	next_early_pgt = 0;

-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }

 /* Create a new PMD entry */
@@ -60,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;

 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;

 again:
@@ -160,7 +160,7 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	reset_early_page_tables();

 	kasan_map_zero_shadow(early_level4_pgt);
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));

 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index b7c857e..6615bf1 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -35,7 +35,7 @@ void __init kasan_map_zero_shadow(pgd_t *pgd)
 	unsigned long end = KASAN_SHADOW_END;

 	for (i = pgd_index(start); start < end; i++) {
-		pgd[i] = __pgd(__pa(zero_pud) | __PAGE_KERNEL_RO);
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
 		start += PGDIR_SIZE;
 	}
 }
-- 
2.1.0
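
For context on why __pa is the problem here: with CONFIG_DEBUG_VIRTUAL=y,
__pa is not pure address arithmetic but a call into the out-of-line
__phys_addr() helper, which is compiled with instrumentation like any
other C file, so using it this early touches a not-yet-mapped shadow
address. A rough sketch of the relevant definitions (paraphrased from
arch/x86/include/asm/page*.h; the exact form varies by kernel version):

	#ifdef CONFIG_DEBUG_VIRTUAL
	extern unsigned long __phys_addr(unsigned long x);  /* out of line, instrumented */
	#else
	#define __phys_addr(x)	__phys_addr_nodebug(x)      /* stays inline */
	#endif

	#define __pa(x)		__phys_addr((unsigned long)(x))
	#define __pa_nodebug(x)	__phys_addr_nodebug((unsigned long)(x))

__pa_nodebug() always takes the inline path, which (together with the
early boot files presumably being built without instrumentation) is why
it is safe to use before kasan_map_zero_shadow() has run.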



^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-14  1:35       ` Randy Dunlap
  -1 siblings, 0 replies; 862+ messages in thread
From: Randy Dunlap @ 2014-09-14  1:35 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Michal Marek, Ingo Molnar, Peter Zijlstra

On 09/10/14 07:31, Andrey Ryabinin wrote:
> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> 
> KASAN uses compile-time instrumentation for checking every memory access,
> therefore fresh GCC >= v5.0.0 required.
> 
> This patch only adds infrastructure for kernel address sanitizer. It's not
> available for use yet. The idea and some code was borrowed from [1].
> 
> Basic idea:
> The main idea of KASAN is to use shadow memory to record whether each byte of memory
> is safe to access or not, and use compiler's instrumentation to check the shadow memory
> on each memory access.
> 
> Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
> and uses direct mapping with a scale and offset to translate a memory
> address to its corresponding shadow address.
> 
> Here is function to translate address to corresponding shadow address:
> 
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
>                              + KASAN_SHADOW_START;
>      }
> where KASAN_SHADOW_SCALE_SHIFT = 3.
> 
> So for every 8 bytes there is one corresponding byte of shadow memory.
> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
> corresponding memory region are valid for access; k (1 <= k <= 7) means that
> the first k bytes are valid for access, and other (8 - k) bytes are not;
> Any negative value indicates that the entire 8-bytes are unaccessible.

                                                           inaccessible.

> Different negative values used to distinguish between different kinds of
> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

  inaccessible

> 
> To be able to detect accesses to bad memory we need a special compiler.
> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
> before each memory access of size 1, 2, 4, 8 or 16.
> 
> These functions check whether memory region is valid to access or not by checking
> corresponding shadow memory. If access is not valid an error printed.
> 
> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
> 
> Based on work by Andrey Konovalov <adech.fo@gmail.com>
> 
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Documentation/kasan.txt | 180 ++++++++++++++++++++++++++++++++++++++++++++++
>  Makefile                |  10 ++-
>  include/linux/kasan.h   |  42 +++++++++++
>  include/linux/sched.h   |   3 +
>  lib/Kconfig.debug       |   2 +
>  lib/Kconfig.kasan       |  16 +++++
>  mm/Makefile             |   1 +
>  mm/kasan/Makefile       |   3 +
>  mm/kasan/kasan.c        | 188 ++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h        |  32 +++++++++
>  mm/kasan/report.c       | 183 ++++++++++++++++++++++++++++++++++++++++++++++
>  scripts/Makefile.lib    |  10 +++
>  12 files changed, 669 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/kasan.txt
>  create mode 100644 include/linux/kasan.h
>  create mode 100644 lib/Kconfig.kasan
>  create mode 100644 mm/kasan/Makefile
>  create mode 100644 mm/kasan/kasan.c
>  create mode 100644 mm/kasan/kasan.h
>  create mode 100644 mm/kasan/report.c
> 
> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
> new file mode 100644
> index 0000000..5a9d903
> --- /dev/null
> +++ b/Documentation/kasan.txt
> @@ -0,0 +1,180 @@
> +Kernel address sanitizer
> +================
> +
> +0. Overview
> +===========
> +
> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> +fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

   a fast and ...

> +
> +KASAN uses compile-time instrumentation for checking every memory access, therefore you
> +will need a special compiler: GCC >= 5.0.0.
> +
> +Currently KASAN supported only for x86_64 architecture and requires kernel

                   is supported

> +to be build with SLUB allocator.

         built

> +
> +1. Usage
> +=========
> +
> +KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
> +
> +To enable KASAN configure kernel with:
> +
> +	  CONFIG_KASAN = y
> +
> +Currently KASAN works only with SLUB.

                              with the SLUB memory allocator.

> +For better bug detection and nicer report enable CONFIG_STACKTRACE, CONFIG_SLUB_DEBUG

                                      report,

> +and put 'slub_debug=FU' to boot cmdline.

                           in the boot cmdline.


Following sentence is confusing.  I'm not sure how to fix it.

> +Please don't use slab poisoning with KASan (slub_debug=P), beacuse if KASan will

                                                                         drop: will

> +detects use after free allocation and free stacktraces will be overwritten by

maybe:     use after free,

> +poison bytes, and KASan won't be able to print this backtraces.

                                                       backtrace.

> +
> +To exclude files from being instrumented by compiler, add a line
> +similar to the following to the respective kernel Makefile:
> +
> +
> +        For a single file (e.g. main.o):
> +                KASAN_SANITIZE_main.o := n
> +
> +        For all files in one directory:
> +                KASAN_SANITIZE := n
> +
> +Only files which are linked to the main kernel image or are compiled as
> +kernel modules are supported by this mechanism.
> +
> +
> +1.1 Error reports
> +==========
> +
> +A typical out of bounds access report looks like this:
> +
> +==================================================================
> +AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b

Curious:  what does "rigth" mean?

> +=============================================================================
> +BUG kmalloc-128 (Not tainted): kasan error
> +-----------------------------------------------------------------------------
> +
> +Disabling lock debugging due to kernel taint
> +INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
> +	__slab_alloc.constprop.72+0x64f/0x680
> +	kmem_cache_alloc+0xa8/0xe0
> +	kasan_kmalloc_oob_rigth+0x2c/0x7a
> +	kasan_tests_init+0x8/0xc
> +	do_one_initcall+0x85/0x1a0
> +	kernel_init_freeable+0x1f1/0x279
> +	kernel_init+0x8/0xd0
> +	ret_from_kernel_thread+0x21/0x30
> +INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
> +INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
> +
> +Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
> +Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
> +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> + 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
> + c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
> + c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
> +Call Trace:
> + [<c1c4446f>] dump_stack+0x4b/0x75
> + [<c11c3f32>] print_trailer+0xf2/0x180
> + [<c11c4ff5>] object_err+0x25/0x30
> + [<c11ccb78>] kasan_report_error+0xf8/0x380
> + [<c1c57940>] ? need_resched+0x21/0x25
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
> + [<c11cbacc>] __asan_store1+0x9c/0xa0
> + [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
> + [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
> + [<c1f8276b>] kasan_tests_init+0x8/0xc
> + [<c1000435>] do_one_initcall+0x85/0x1a0
> + [<c1f6f508>] ? repair_env_string+0x23/0x66
> + [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
> + [<c10c9883>] ? parse_args+0x33/0x450
> + [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
> + [<c1000558>] kernel_init+0x8/0xd0
> + [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
> + [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
> +Write of size 1 by thread T1:
> +Memory state around the buggy address:
> + c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
> +>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
> +                    ^
> + c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
> + c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
> + c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
> + c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
> + c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
> +==================================================================
> +
> +In the last section the report shows memory state around the accessed address.
> +Reading this part requires some more undestanding of how KASAN works.

                                        understanding

> +
> +Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
> +partially addressable, freed or they can be part of a redzone.
> +If bytes are marked as addressable that means that they belong to some
> +allocated memory block and it is possible to read or modify any of these
> +bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
> +When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
> +memory block, this bytes are partially addressable and marked by 'N'.
> +
> +Markers of unaccessible bytes could be found in mm/kasan/kasan.h header:

              inaccessible

> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
> +#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
> +
> +In the report above the arrows point to the shadow byte 03, which means that the
> +accessed address is partially addressable.
> +
> +
> +2. Implementation details
> +========================
> +
> +From a high level, our approach to memory error detection is similar to that
> +of kmemcheck: use shadow memory to record whether each byte of memory is safe
> +to access, and use compile-time instrumentation to check shadow on each memory
> +access.
> +
> +AddressSanitizer dedicates 1/8 of the addressable in kernel memory to its shadow

                                                     in-kernel or just kernel memory

> +memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
> +scale and offset to translate a memory address to its corresponding shadow address.
> +
> +Here is function witch translate address to corresponding shadow address:

   Here is the function which translates an address to its corresponding shadow address:

> +
> +unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +	return ((addr - KASAN_SHADOW_START) >> KASAN_SHADOW_SCALE_SHIFT)
> +		+ KASAN_SHADOW_START;
> +}
> +
> +where KASAN_SHADOW_SCALE_SHIFT = 3.
> +
> +Each shadow byte corresponds to 8 bytes of the main memory. We use the
> +following encoding for each shadow byte: 0 means that all 8 bytes of the
> +corresponding memory region are addressable; k (1 <= k <= 7) means that
> +the first k bytes are addressable, and other (8 - k) bytes are not;
> +any negative value indicates that the entire 8-byte word is unaddressable.
> +We use different negative values to distinguish between different kinds of
> +unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
> +

Is there any need for something similar to k (1 <= k <= 7) but meaning that the
*last* k bytes are addressable instead of the first k bytes?

> +Poisoning or unpoisoning a byte in the main memory means writing some special
> +value into the corresponding shadow memory. This value indicates whether the
> +byte is addressable or not.
> +


> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..65f8145
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,188 @@
> +
> +/* to shut up compiler complains */

                          complaints

> +void __asan_init_v3(void) {}
> +EXPORT_SYMBOL(__asan_init_v3);
> +void __asan_handle_no_return(void) {}
> +EXPORT_SYMBOL(__asan_handle_no_return);


-- 
~Randy

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function.
  2014-09-10 20:32         ` Andrey Ryabinin
@ 2014-09-15  7:11           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-15  7:11 UTC (permalink / raw)
  To: Andrey Ryabinin, Christoph Lameter
  Cc: LKML, Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

On 09/11/2014 12:32 AM, Andrey Ryabinin wrote:
> 2014-09-10 20:16 GMT+04:00 Christoph Lameter <cl@linux.com>:
>> On Wed, 10 Sep 2014, Andrey Ryabinin wrote:
>>
>>> virt_to_obj takes kmem_cache address, address of slab page,
>>> address x pointing somewhere inside slab object,
>>> and returns address of the beginning of object.
>>
>> This function is SLUB specific. Does it really need to be in slab.h?
>>
> 
> Oh, yes this should be in slub.c
> 

I forgot that include/linux/slub_def.h exists. Perhaps it would be better to move
virt_to_obj into slub_def.h to avoid an ugly #ifdef CONFIG_KASAN in slub.c.
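
For reference, the helper being discussed takes the kmem_cache, the address of
the slab page and a pointer x somewhere inside a slab object, and rounds x down
to the start of that object. A minimal sketch (reconstructed from the thread,
so the exact body in the series may differ):

	static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
	{
		/* round x's offset within the slab page down to an object boundary */
		return x - ((x - slab_page) % s->size);
	}

Since it is a plain static inline with no KASAN-specific code in it, it can live
in include/linux/slub_def.h without dragging any #ifdef CONFIG_KASAN along.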


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 05/10] mm: slub: share slab_err and object_err functions
  2014-09-10 14:31     ` Andrey Ryabinin
@ 2014-09-15  7:11       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-15  7:11 UTC (permalink / raw)
  To: linux-kernel, Christoph Lameter
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Pekka Enberg,
	David Rientjes

On 09/10/2014 06:31 PM, Andrey Ryabinin wrote:
> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.
> 

The same as with virt_to_obj: include/linux/slub_def.h is a much better place for this than mm/slab.h.


> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slab.h | 5 +++++
>  mm/slub.c | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 3e3a6ae..87491dd 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -345,6 +345,11 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>  
>  void *slab_next(struct seq_file *m, void *p, loff_t *pos);
>  void slab_stop(struct seq_file *m, void *p);
> +void slab_err(struct kmem_cache *s, struct page *page,
> +		const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +		u8 *object, char *reason);
> +
>  
>  static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
>  {
> diff --git a/mm/slub.c b/mm/slub.c
> index fa86e58..c4158b2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -639,14 +639,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  	dump_stack();
>  }
>  
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
>  			u8 *object, char *reason)
>  {
>  	slab_bug(s, "%s", reason);
>  	print_trailer(s, page, object);
>  }
>  
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
>  			const char *fmt, ...)
>  {
>  	va_list args;
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
  2014-09-14  1:35       ` Randy Dunlap
@ 2014-09-15 15:28         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-15 15:28 UTC (permalink / raw)
  To: Randy Dunlap, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Michal Marek, Ingo Molnar, Peter Zijlstra

On 09/14/2014 05:35 AM, Randy Dunlap wrote:
> Following sentence is confusing.  I'm not sure how to fix it.
> 


Perhaps rephrase is like this:

Do not use slub poisoning with KASan if user tracking enabled (iow slub_debug=PU).
User tracking info (allocation/free stacktraces) are stored inside slub object's metadata.
Slub poisoning overwrites slub object and it's metadata with poison value on freeing.
So if KASan will detect use after free, allocation/free stacktraces will be overwritten
and KASan won't be able to print them.


>> +Please don't use slab poisoning with KASan (slub_debug=P), beacuse if KASan will
> 
>                                                                          drop: will
> 
>> +detects use after free allocation and free stacktraces will be overwritten by
> 
> maybe:     use after free,
> 
>> +poison bytes, and KASan won't be able to print this backtraces.
> 
>                                                        backtrace.
> 
>> +
>> +Each shadow byte corresponds to 8 bytes of the main memory. We use the
>> +following encoding for each shadow byte: 0 means that all 8 bytes of the
>> +corresponding memory region are addressable; k (1 <= k <= 7) means that
>> +the first k bytes are addressable, and other (8 - k) bytes are not;
>> +any negative value indicates that the entire 8-byte word is unaddressable.
>> +We use different negative values to distinguish between different kinds of
>> +unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
>> +
> 
> Is there any need for something similar to k (1 <= k <= 7) but meaning that the
> *last* k bytes are addressable instead of the first k bytes?
> 

There is no need for that. Slub allocations are always 8-byte aligned (at least on 64-bit systems).
Now I realize that it could be a problem for 32-bit systems. Anyway, the best way to deal
with that would be to align allocations to 8 bytes.

>> +Poisoning or unpoisoning a byte in the main memory means writing some special
>> +value into the corresponding shadow memory. This value indicates whether the
>> +byte is addressable or not.
>> +
> 



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure.
  2014-09-15 15:28         ` Andrey Ryabinin
@ 2014-09-15 16:24           ` Randy Dunlap
  -1 siblings, 0 replies; 862+ messages in thread
From: Randy Dunlap @ 2014-09-15 16:24 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Michal Marek, Ingo Molnar, Peter Zijlstra

On 09/15/14 08:28, Andrey Ryabinin wrote:
> On 09/14/2014 05:35 AM, Randy Dunlap wrote:
>> Following sentence is confusing.  I'm not sure how to fix it.
>>
> 
> 
> Perhaps rephrase is like this:
> 
> Do not use slub poisoning with KASan if user tracking enabled (iow slub_debug=PU).

                                       if user tracking is enabled

> User tracking info (allocation/free stacktraces) are stored inside slub object's metadata.
> Slub poisoning overwrites slub object and it's metadata with poison value on freeing.

                                            its

> So if KASan will detect use after free, allocation/free stacktraces will be overwritten

  So if KASan detects a use after free, allocation/free stacktraces are overwritten

> and KASan won't be able to print them.


Thanks.

-- 
~Randy

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11 11:51                 ` Andrey Ryabinin
@ 2014-09-18 16:54                   ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-18 16:54 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: H. Peter Anvin, Dave Hansen, Andrey Ryabinin, LKML,
	Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Andi Kleen,
	Vegard Nossum, x86, linux-mm, Thomas Gleixner, Ingo Molnar

On 09/11/2014 07:51 AM, Andrey Ryabinin wrote:
> On 09/11/2014 08:29 AM, Sasha Levin wrote:
>> > On 09/11/2014 12:26 AM, H. Peter Anvin wrote:
>>> >> Except you just broke PVop kernels.
>> > 
>> > So is this why v2 refuses to boot on my KVM guest? (was digging
>> > into that before I send a mail out).
>> > 
> Maybe this will help?
> 
> 
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
> Subject: [PATCH] x86_64: kasan: fix kernel boot with CONFIG_DEBUG_VIRTUAL=y
> 
> Use __pa_nodebug instead of __pa before the shadow is initialized.
> __pa with CONFIG_DEBUG_VIRTUAL=y may result in an __asan_load
> call before the shadow area is initialized.

Woops, I got sidetracked and forgot to reply. Yes, this patch fixed
the issue, KASan is running properly now.


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-09-24 12:43   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, linux-kbuild, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones

Hi.

This is the third iteration of the kernel address sanitizer (KASan).

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v5.0.0.

Patches are based on the mmotm-2014-09-22-16-57 tree and are also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v3

Note: patch (https://lkml.org/lkml/2014/9/4/364) for gcc5 support
somehow just disappeared from the last mmotm, so you will need to apply it.
It's already done in my git above.

Changes since v2:

    - Shadow moved to the vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow from outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some SLUB-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This requires two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now we reserve 1/8 of all virtual addresses available for the kernel for shadow memory:
      16TB on x86_64, enough to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped into the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with various --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both bad reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  the bug right when it happens, so we always know the exact
	  place of the first bad read/write.


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
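
    To make the encoding above concrete, here is a hedged sketch of the check
    such a hook conceptually performs for an aligned access of size <= 8 (an
    illustration only, not the exact code from mm/kasan/kasan.c in this series;
    kasan_check() is a made-up name):

         /* true if [addr, addr + size) is valid to access, for size <= 8 */
         static bool kasan_check(unsigned long addr, size_t size)
         {
                s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                if (shadow == 0)
                        return true;    /* whole 8-byte granule is addressable */

                /*
                 * Partially addressable granule: the shadow byte holds the
                 * number of addressable leading bytes.  A negative (poison)
                 * value always fails this signed comparison, so it is caught
                 * too.  An access spanning two granules also needs the next
                 * shadow byte checked; that is omitted here for brevity.
                 */
                return (s8)((addr & (KASAN_SHADOW_SCALE_SIZE - 1)) + size - 1) < shadow;
         }

    When such a check fails, the real code hands the address over to mm/kasan/report.c,
    which prints a report like the one shown earlier in the thread.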


Andrey Ryabinin (13):
  Add kernel address sanitizer infrastructure.
  efi: libstub: disable KASAN for efistub
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  kasan: introduce inline instrumentation

 Documentation/kasan.txt               | 179 ++++++++++++++
 Makefile                              |  16 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 +++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 +++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   5 +
 include/linux/kasan.h                 |  72 ++++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |   9 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 +++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 441 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  37 +++
 mm/kasan/report.c                     | 259 ++++++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  56 ++++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1595 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c


Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <linux-kbuild@vger.kernel.org>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>


-- 
2.1.1


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
@ 2014-09-24 12:43   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, linux-kbuild, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones

Hi.

This is the third iteration of the kernel address sanitizer (KASan).

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v5.0.0.

Patches are based on the mmotm-2014-09-22-16-57 tree and are also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v3

Note: the patch (https://lkml.org/lkml/2014/9/4/364) for gcc5 support seems to have
disappeared from the last mmotm, so you will need to apply it.
It's already applied in my git tree above.

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This requires two GCC
         patches that are not yet in trunk:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped into the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map
      real memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with various --param options to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before the mem*() calls).

     - Branch profiling is disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator were moved to the right places


Comparison with other debugging features:
=========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. when a redzone is overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches bugs
	  right when they happen, so we always know the exact place of the
	  first bad read/write.


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory, we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether a memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
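
    For illustration, here is a minimal sketch of what such a check boils down to
    for a 1-byte access. This is illustrative only: "shadow_of()" is a placeholder
    for kasan_mem_to_shadow(), and the logic mirrors memory_is_poisoned_1() from
    this series:

         /* Illustrative sketch, not the exact code from the patches. */
         static bool poisoned_1(unsigned long addr)
         {
                 s8 shadow = *(s8 *)shadow_of(addr); /* one shadow byte covers 8 bytes */

                 if (!shadow)
                         return false;   /* whole 8-byte granule is accessible */
                 if (shadow < 0)
                         return true;    /* redzone or freed memory */
                 /* 1 <= shadow <= 7: only the first 'shadow' bytes are accessible */
                 return (addr & 7) >= shadow;
         }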


Andrey Ryabinin (13):
  Add kernel address sanitizer infrastructure.
  efi: libstub: disable KASAN for efistub
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  kasan: introduce inline instrumentation

 Documentation/kasan.txt               | 179 ++++++++++++++
 Makefile                              |  16 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 +++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 +++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   5 +
 include/linux/kasan.h                 |  72 ++++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |   9 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 +++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 441 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  37 +++
 mm/kasan/report.c                     | 259 ++++++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  56 ++++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1595 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c


Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <linux-kbuild@vger.kernel.org>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>


-- 
2.1.1


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v3 01/13] Add kernel address sanitizer infrastructure.
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Randy Dunlap, Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation to check every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory, we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether a memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
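
As a worked example (the object size and marker placement here are illustrative;
the slub integration itself comes in a later patch of this series): a 13-byte
object starting at an 8-byte aligned address covers two 8-byte granules, so its
two shadow bytes become 00 and 05 (13 = 8 + 5), and the granules of the redzone
placed after the object are filled with a negative marker such as
KASAN_KMALLOC_REDZONE (0xfc):

     /*
      * Illustrative shadow encoding for a 13-byte object:
      *
      *   memory:  [ 8 valid bytes ][ 5 valid + 3 inaccessible ][ redzone ... ]
      *   shadow:         00                    05                 fc  fc ...
      */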

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 179 ++++++++++++++++++++++++++
 Makefile                |  11 +-
 include/linux/kasan.h   |  42 ++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  15 +++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 330 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  31 +++++
 mm/kasan/report.c       | 180 ++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 12 files changed, 805 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..4173783
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,179 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for x86_64 architecture and requires kernel
+to be built with SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+'slub_debug=FU' in the boot cmdline.
+
+Do not use slub poisoning with KASan if user tracking is enabled (i.e. slub_debug=PU).
+User tracking info (allocation/free stacktraces) is stored inside the slub object's metadata.
+Slub poisoning overwrites the slub object and its metadata with a poison value on freeing.
+So if KASan detects a use-after-free, the allocation/free stacktraces have already been
+overwritten and KASan won't be able to print them.
+
+To disable instrumentation of a file or a directory, add to the kernel Makefile:
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes could be found in mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrows point to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
diff --git a/Makefile b/Makefile
index a37665d..6cefe5e 100644
--- a/Makefile
+++ b/Makefile
@@ -397,7 +397,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -441,7 +441,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -772,6 +772,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 13c34f2..708a815 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a8c0ba9..219a418 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly debugging feature. It consumes about 1/8
+	  of available memory and brings about ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index af993eb..7a4b87e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,3 +65,4 @@ obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..454df8d
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,330 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..5895e31
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,31 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..bf559fa
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,180 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 01/13] Add kernel address sanitizer infrastructure.
@ 2014-09-24 12:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Randy Dunlap, Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation to check every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory, we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether a memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 179 ++++++++++++++++++++++++++
 Makefile                |  11 +-
 include/linux/kasan.h   |  42 ++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  15 +++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 330 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  31 +++++
 mm/kasan/report.c       | 180 ++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 12 files changed, 805 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..4173783
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,179 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for x86_64 architecture and requires kernel
+to be built with SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+'slub_debug=FU' in the boot cmdline.
+
+Do not use slub poisoning with KASan if user tracking is enabled (i.e. slub_debug=PU).
+User tracking info (allocation/free stacktraces) is stored inside the slub object's metadata.
+Slub poisoning overwrites the slub object and its metadata with a poison value on freeing.
+So if KASan detects a use-after-free, the allocation/free stacktraces have already been
+overwritten and KASan won't be able to print them.
+
+To disable instrumentation of a file or a directory, add to the kernel Makefile:
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes could be found in mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrows point to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
diff --git a/Makefile b/Makefile
index a37665d..6cefe5e 100644
--- a/Makefile
+++ b/Makefile
@@ -397,7 +397,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -441,7 +441,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -772,6 +772,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 13c34f2..708a815 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a8c0ba9..219a418 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index af993eb..7a4b87e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,3 +65,4 @@ obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..454df8d
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,330 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..5895e31
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,31 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..bf559fa
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,180 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 02/13] efi: libstub: disable KASAN for efistub
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

KASan, like many other options, should be disabled for this stub
to prevent build failures.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 drivers/firmware/efi/libstub/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 03/13] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for KASan:
the compiler inserts an __asan_load call between load_segment and wrmsrl. If the
kernel is built with a stack protector, this results in a boot failure, because
__asan_load itself is built with a stack protector.

To avoid this, irq_stack_union.gs_base is stored in a temporary variable before
load_segment(), so __asan_load is called before load_segment().

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment()
    (see the sketch below), which tells the compiler not to instrument this
    function. However, this results in a build failure with CONFIG_KASAN=y
    and CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c
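
A minimal stand-alone sketch of alternative (a), assuming a GCC version that
supports the no_sanitize_address function attribute (an illustration only, not
what this patch does):

	/* With -fsanitize=address (or kernel-address), the attribute below
	 * keeps the compiler from emitting __asan_* checks in this function. */
	#include <stdio.h>

	__attribute__((no_sanitize_address))
	static void uninstrumented_write(char *p)
	{
		p[0] = 'x';	/* no __asan_store1() call emitted for this store */
	}

	int main(void)
	{
		char buf[8];

		uninstrumented_write(buf);
		printf("%c\n", buf[0]);
		return 0;
	}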

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index fe52f2d..51d393f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 04/13] x86_64: add KASan support
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

This patch adds arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.
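
A quick stand-alone sanity check of the shadow window arithmetic (an editorial
illustration, not part of the patch), using the constants from asm/kasan.h and
CONFIG_KASAN_SHADOW_OFFSET introduced below:

	#include <stdio.h>

	#define KASAN_SHADOW_START	0xffffd90000000000UL
	#define KASAN_SHADOW_END	0xffffe90000000000UL
	#define KASAN_SHADOW_OFFSET	0xdfffe90000000000UL
	#define KASAN_SHADOW_SCALE_SHIFT 3

	int main(void)
	{
		unsigned long shadow = KASAN_SHADOW_END - KASAN_SHADOW_START;

		/* 0x100000000000 bytes of shadow: 16 TB */
		printf("shadow size : %lu TB\n", shadow >> 40);
		/* each shadow byte covers 8 bytes, so 8 * 16 TB = 128 TB covered */
		printf("covered size: %lu TB\n",
		       (shadow << KASAN_SHADOW_SCALE_SHIFT) >> 40);
		/* first covered address: 0xffff800000000000, the start of the
		 * kernel half of the x86_64 address space */
		printf("covered from: %#lx\n",
		       (KASAN_SHADOW_START - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT);
		return 0;
	}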

Change-Id: I289ea19eab98e572df7f80cacec661813ea61281
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/boot/Makefile            |  2 +
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/include/asm/kasan.h      | 27 ++++++++++++
 arch/x86/kernel/Makefile          |  2 +
 arch/x86/kernel/dumpstack.c       |  5 ++-
 arch/x86/kernel/head64.c          |  9 +++-
 arch/x86/kernel/head_64.S         | 28 +++++++++++++
 arch/x86/mm/Makefile              |  3 ++
 arch/x86/mm/init.c                |  3 ++
 arch/x86/mm/kasan_init_64.c       | 87 +++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |  2 +-
 arch/x86/realmode/rm/Makefile     |  1 +
 arch/x86/vdso/Makefile            |  1 +
 lib/Kconfig.kasan                 |  6 +++
 15 files changed, 175 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2872aaa..cec0c26 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -136,6 +136,7 @@ config X86
 	select HAVE_ACPI_APEI if ACPI
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
+	select HAVE_ARCH_KASAN if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index dbe8dd2..9204cc0 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 7a801a3..8e5b9b3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..056c943
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_map_shadow(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_map_shadow(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index ada2e2d..4c59d7f 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..4a5a597 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,7 @@
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
 #include <asm/init.h>
+#include <asm/kasan.h>
 #include <asm/page.h>
 #include <asm/page_types.h>
 #include <asm/sections.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..c6ea8a4
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,87 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 54cf44f..b458a00 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -12,4 +13,9 @@ config KASAN
 	  of available memory and brings about ~x3 performance slowdown.
 	  For better error detection enable CONFIG_STACKTRACE,
 	  and add slub_debug=U to boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+	default 0xdfffe90000000000 if X86_64
+
 endif
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 05/13] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
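
A hypothetical test snippet (not part of this patch, e.g. inside a test module)
showing the kind of bug these hooks catch, touching a page after it has been
freed:

	#include <linux/gfp.h>
	#include <linux/mm.h>

	static void kasan_uaf_page_demo(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, 0);
		char *p;

		if (!page)
			return;

		p = page_address(page);
		__free_pages(page, 0);

		/* The shadow for this page is now KASAN_FREE_PAGE, so the
		 * instrumented store below triggers a "use after free" report. */
		p[0] = 'x';
	}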

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 92075d5..686b5c2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 454df8d..7cfc1fe 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -251,6 +251,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report_error(&info);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5895e31..5e61799 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index bf559fa..f9d4e8d 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ee95d0a..ef3604a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -59,6 +59,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -755,6 +756,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -941,6 +943,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 06/13] mm: slub: introduce virt_to_obj function.
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

virt_to_obj() takes the kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.
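
As an illustration (the numbers are made up for this example and are not
part of the patch): with s->size == 256 and a slab page mapped at
0xffff880012340000, a pointer into the middle of an object resolves like this:

	void *slab_page = (void *)0xffff880012340000UL;
	void *x         = (void *)0xffff880012340a10UL;

	/* (x - slab_page) == 0xa10 and 0xa10 % 256 == 0x10, so        */
	/* virt_to_obj(s, slab_page, x) == (void *)0xffff880012340a00   */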

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 07/13] mm: slub: share slab_err and object_err functions
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

Remove the static qualifier from slab_err() and object_err() and add
their declarations to include/linux/slub_def.h so they can be used by
the kernel address sanitizer.
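
For reference, the kasan report path added later in this series uses them
roughly like this (a sketch based on the mm/kasan/report.c hunk in patch 09,
not code introduced by this patch):

	cache = page->slab_cache;
	slab_err(cache, page, "access to slab redzone");

	object = virt_to_obj(cache, page_address(page), (void *)access_addr);
	object_err(cache, page, object, "kasan error");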

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 4 ++++
 mm/slub.c                | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..8fed60d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,8 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index ae7b9f1..82282f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 08/13] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which may be useful for different checkers (e.g. KASan).
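
With this patch both hooks are intentionally empty; the slub kasan patch
later in this series fills them in, roughly:

	static inline void metadata_access_enable(void)
	{
		kasan_disable_local();
	}

	static inline void metadata_access_disable(void)
	{
		kasan_enable_local();
	}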

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 82282f5..9b1f75c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested by
the caller is marked as accessible, and the rest of the object (including
slub's metadata) is marked as a redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access an object's
metadata, so instrumentation for these files is disabled.
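
As an illustration of the resulting shadow state (the sizes are an
assumption for this example, not taken from the patch): consider a
kmalloc(10) object served from a cache whose objects occupy 32 bytes,
with one shadow byte per 8 bytes of memory:

	/* after kasan_kmalloc(cache, object, 10):   shadow = 00 02 fc fc
	 *   bytes  0..9   accessible,
	 *   bytes 10..15  covered by the partial shadow byte 0x02,
	 *   bytes 16..31  KASAN_KMALLOC_REDZONE
	 *
	 * after kasan_slab_free(cache, object):     shadow = fb fb fb fb
	 *   the whole object is KASAN_KMALLOC_FREE, so a later access
	 *   through a dangling pointer is reported as use-after-free.
	 */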

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 24 +++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  5 +++
 mm/kasan/report.c     | 27 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 36 +++++++++++++++++--
 9 files changed, 203 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..4b866fa 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,17 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_free_slab_pages(struct page *page, int order);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +53,19 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index b458a00..d16b899 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 7a4b87e..c08a70f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7cfc1fe..3c1687a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -265,6 +266,102 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_free_slab_pages(struct page *page, int order)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_end = round_up(object_end, PAGE_SIZE);
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	size_t size = padding_end - padding_start;
+
+	if (size)
+		kasan_poison_shadow((void *)padding_start,
+				size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5e61799..b3974c7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,11 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f9d4e8d..c42f6ba 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,15 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -73,12 +79,33 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
+	case KASAN_SLAB_FREE:
 		dump_page(page, "kasan error");
 		dump_stack();
 		break;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3a6e0cf..33868b4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -795,6 +795,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -969,8 +970,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 9b1f75c..12ffdd0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p);
+		}
 	}
 
 	page->freelist = start;
@@ -1442,6 +1454,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	}
 
 	kmemcheck_free_shadow(page, compound_order(page));
+	kasan_free_slab_pages(page, compound_order(page));
 
 	mod_zone_page_state(page_zone(page),
 		(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2488,6 +2501,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2514,6 +2528,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2897,6 +2913,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3269,6 +3286,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3312,12 +3331,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3333,6 +3354,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 10/13] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Alexander Viro

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is allocated
using kmalloc() and that kmalloc() internally rounds up the allocation size.
So this is not a bug, but it makes kasan complain about such accesses.
To avoid such reports we mark the rounded-up allocation size as
accessible in the shadow.
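
For example (illustrative only): for a 5-character name, name->len + 1 == 6
and, on 64-bit, round_up(6, sizeof(unsigned long)) == 8, so the whole word
read by dentry_string_cmp() is marked accessible:

	kasan_unpoison_shadow(dname, round_up(5 + 1, sizeof(unsigned long)));
	/* unpoisons 8 bytes, covering the word-sized load in dentry_string_cmp() */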

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 8552986..7811eb2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1395,6 +1396,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable the kasan
checks around these accesses.
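
To make the problem concrete (the sizes below are an assumption for this
example, not taken from the patch):

	char *p = kmalloc(10, GFP_KERNEL);	/* may come from a 16-byte cache */
	/* kasan:    bytes 0..9 of p accessible, bytes 10..15 redzone         */
	/* kmemleak: tracks the object as 16 bytes, so scan_block() and the   */
	/* crc32() in update_checksum() read bytes 10..15 and would trigger   */
	/* false reports without kasan_disable_local()/kasan_enable_local().  */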

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v3 12/13] lib: add kasan test module
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more different stuff here in the future (like
stack/global variable out-of-bounds accesses and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index d16b899..faddb0e 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,4 +19,12 @@ config KASAN_SHADOW_OFFSET
 	hex
 	default 0xdfffe90000000000 if X86_64
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing varios nasty things like
+	  out of bounds accesses, use after free. It is usefull for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84a56f7..d620d27 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_MODULE) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..66a04eb
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size , GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 12:44     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-24 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Michal Marek

This patch is only a demonstration of how easily this could be achieved.
GCC doesn't support this feature yet; two patches are required for it:
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

In inline instrumentation mode the compiler directly inserts code
checking shadow memory instead of emitting __asan_load*/__asan_store*
calls. This is usually faster than outline instrumentation; in some
workloads inline is 2 times faster than outline.
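
Conceptually, for an N-byte load from addr the compiler emits roughly
the following check inline (a sketch only, not the exact generated
code; KASAN_SHADOW_OFFSET stands for the configured shadow offset and
the report stubs follow the names added by this patch):

	s8 shadow = *(s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
				+ KASAN_SHADOW_OFFSET);
	if (unlikely(shadow &&
		     (s8)((addr & KASAN_SHADOW_MASK) + N - 1) >= shadow))
		__asan_report_recover_load_n(addr, N); /* store variant for writes */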

The downside of inline instrumentation is a bloated kernel .text size:

size noasan/vmlinux
   text     data     bss      dec     hex    filename
11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux

size outline/vmlinux
   text    data     bss      dec      hex    filename
16553474  1602592  950272  19106338  1238a22 outline/vmlinux

size inline/vmlinux
   text    data     bss      dec      hex    filename
32064759  1598688  946176  34609623  21019d7 inline/vmlinux

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile          |  5 +++++
 lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
 mm/kasan/report.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 74 insertions(+)

diff --git a/Makefile b/Makefile
index 6cefe5e..fe7c534 100644
--- a/Makefile
+++ b/Makefile
@@ -773,6 +773,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
 ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
+		 $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
+endif
+
   ifeq ($(CFLAGS_KASAN),)
     $(warning Cannot use CONFIG_KASAN: \
 	      -fsanitize=kernel-address not supported by compiler)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index faddb0e..c4ac040 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -27,4 +27,28 @@ config TEST_KASAN
 	  out of bounds accesses, use after free. It is usefull for testing
 	  kernel debugging features like kernel address sanitizer.
 
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_INLINE if X86_64
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access compiler insert function call
+	  __asan_load*/__asan_store*. These functions performs check
+	  of shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat size of kernel's .text section so
+	  much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  Compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about x2 boost over outline instrumentation), but
+	  make kernel's .text size much bigger.
+
+endchoice
+
 endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index c42f6ba..a9262f8 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -212,3 +212,48 @@ void kasan_report_user_access(struct access_info *info)
 		"=================================\n");
 	spin_unlock_irqrestore(&report_lock, flags);
 }
+
+#define CALL_KASAN_REPORT(__addr, __size, __is_write) \
+	struct access_info info;                      \
+	info.access_addr = __addr;                    \
+	info.access_size = __size;                    \
+	info.is_write = __is_write;                   \
+	info.ip = _RET_IP_;                           \
+	kasan_report_error(&info)
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_recover_load##size(unsigned long addr) \
+{                                                         \
+	CALL_KASAN_REPORT(addr, size, false);             \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_recover_load##size)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_recover_store##size(unsigned long addr) \
+{                                                          \
+	CALL_KASAN_REPORT(addr, size, true);               \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_recover_store##size)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_recover_load_n(unsigned long addr, size_t size)
+{
+	CALL_KASAN_REPORT(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_recover_load_n);
+
+void __asan_report_recover_store_n(unsigned long addr, size_t size)
+{
+	CALL_KASAN_REPORT(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_recover_store_n);
-- 
2.1.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-24 15:11     ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2014-09-24 15:11 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

On Wed, 24 Sep 2014 16:43:56 +0400 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> Note: patch (https://lkml.org/lkml/2014/9/4/364) for gcc5 support
> somewhat just disappeared from the last mmotm,

hmpf, I must have fat-fingered that one and accidentally lost
the patch.  Fixed, thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 05/13] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-25 17:04       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-25 17:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as inaccessible.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/compaction.c       |  2 ++
>  mm/kasan/kasan.c      | 14 ++++++++++++++
>  mm/kasan/kasan.h      |  1 +
>  mm/kasan/report.c     |  7 +++++++
>  mm/page_alloc.c       |  3 +++
>  6 files changed, 33 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 01c99fe..9714fba 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
>
>  void kasan_unpoison_shadow(const void *address, size_t size);
>
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN */
>
>  static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
> @@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
>  static inline void kasan_enable_local(void) {}
>  static inline void kasan_disable_local(void) {}
>
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
>  #endif /* CONFIG_KASAN */
>
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 92075d5..686b5c2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -16,6 +16,7 @@
>  #include <linux/sysfs.h>
>  #include <linux/balloon_compaction.h>
>  #include <linux/page-isolation.h>
> +#include <linux/kasan.h>
>  #include "internal.h"
>
>  #ifdef CONFIG_COMPACTION
> @@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
>         list_for_each_entry(page, list, lru) {
>                 arch_alloc_page(page, 0);
>                 kernel_map_pages(page, 1, 1);
> +               kasan_alloc_pages(page, 0);
>         }
>  }
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 454df8d..7cfc1fe 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -251,6 +251,20 @@ static __always_inline void check_memory_region(unsigned long addr,
>         kasan_report_error(&info);
>  }
>
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> +       if (likely(!PageHighMem(page)))
> +               kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +       if (likely(!PageHighMem(page)))
> +               kasan_poison_shadow(page_address(page),
> +                               PAGE_SIZE << order,
> +                               KASAN_FREE_PAGE);
> +}
> +
>  void __asan_load1(unsigned long addr)
>  {
>         check_memory_region(addr, 1, false);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 5895e31..5e61799 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,7 @@
>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index bf559fa..f9d4e8d 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
>         case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>                 bug_type = "out of bounds access";
>                 break;
> +       case KASAN_FREE_PAGE:
> +               bug_type = "use after free";
> +               break;
>         case KASAN_SHADOW_GAP:
>                 bug_type = "wild memory access";
>                 break;
> @@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
>         page = virt_to_head_page((void *)info->access_addr);
>
>         switch (shadow_val) {
> +       case KASAN_FREE_PAGE:
> +               dump_page(page, "kasan error");
> +               dump_stack();
> +               break;
>         case KASAN_SHADOW_GAP:
>                 pr_err("No metainfo is available for this access.\n");
>                 dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ee95d0a..ef3604a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -59,6 +59,7 @@
>  #include <linux/page-debug-flags.h>
>  #include <linux/hugetlb.h>
>  #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -755,6 +756,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>
>         trace_mm_page_free(page, order);
>         kmemcheck_free_shadow(page, order);
> +       kasan_free_pages(page, order);
>
>         if (PageAnon(page))
>                 page->mapping = NULL;
> @@ -941,6 +943,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
>
>         arch_alloc_page(page, order);
>         kernel_map_pages(page, 1 << order, 1);
> +       kasan_alloc_pages(page, order);
>
>         if (gfp_flags & __GFP_ZERO)
>                 prep_zero_page(page, order, gfp_flags);
> --
> 2.1.1
>


Looks good to me.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 08/13] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-26  4:03       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26  4:03 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Pekka Enberg, David Rientjes

Looks good to me.

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Wrap access to object's metadata in external functions with
> metadata_access_enable()/metadata_access_disable() function calls.
>
> This hooks separates payload accesses from metadata accesses
> which might be useful for different checkers (e.g. KASan).
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slub.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 82282f5..9b1f75c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -467,13 +467,23 @@ static int slub_debug;
>  static char *slub_debug_slabs;
>  static int disable_higher_order_debug;
>
> +static inline void metadata_access_enable(void)
> +{
> +}
> +
> +static inline void metadata_access_disable(void)
> +{
> +}
> +
>  /*
>   * Object debugging
>   */
>  static void print_section(char *text, u8 *addr, unsigned int length)
>  {
> +       metadata_access_enable();
>         print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
>                         length, 1);
> +       metadata_access_disable();
>  }
>
>  static struct track *get_track(struct kmem_cache *s, void *object,
> @@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
>                 trace.max_entries = TRACK_ADDRS_COUNT;
>                 trace.entries = p->addrs;
>                 trace.skip = 3;
> +               metadata_access_enable();
>                 save_stack_trace(&trace);
> +               metadata_access_disable();
>
>                 /* See rant in lockdep.c */
>                 if (trace.nr_entries != 0 &&
> @@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
>         u8 *fault;
>         u8 *end;
>
> +       metadata_access_enable();
>         fault = memchr_inv(start, value, bytes);
> +       metadata_access_disable();
>         if (!fault)
>                 return 1;
>
> @@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
>         if (!remainder)
>                 return 1;
>
> +       metadata_access_enable();
>         fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
> +       metadata_access_disable();
>         if (!fault)
>                 return 1;
>         while (end > fault && end[-1] == POISON_INUSE)
> --
> 2.1.1
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-26  4:48       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26  4:48 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Pekka Enberg, David Rientjes

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> Initially all objects in newly allocated slab page, marked as free.
> Later, when allocation of slub object happens, requested by caller
> number of bytes marked as accessible, and the rest of the object
> (including slub's metadata) marked as redzone (inaccessible).
>
> We also mark object as accessible if ksize was called for this object.
> There is some places in kernel where ksize function is called to inquire
> size of really allocated area. Such callers could validly access whole
> allocated memory, so it should be marked as accessible.
>
> Code in slub.c and slab_common.c files could validly access to object's
> metadata, so instrumentation for this files are disabled.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h | 24 +++++++++++++
>  include/linux/slab.h  | 11 ++++--
>  lib/Kconfig.kasan     |  1 +
>  mm/Makefile           |  3 ++
>  mm/kasan/kasan.c      | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |  5 +++
>  mm/kasan/report.c     | 27 ++++++++++++++
>  mm/slab_common.c      |  5 ++-
>  mm/slub.c             | 36 +++++++++++++++++--
>  9 files changed, 203 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 9714fba..4b866fa 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -32,6 +32,17 @@ void kasan_unpoison_shadow(const void *address, size_t size);
>
>  void kasan_alloc_pages(struct page *page, unsigned int order);
>  void kasan_free_pages(struct page *page, unsigned int order);
> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
> +
> +void kasan_free_slab_pages(struct page *page, int order);
>
>  #else /* CONFIG_KASAN */
>
> @@ -42,6 +53,19 @@ static inline void kasan_disable_local(void) {}
>
>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_mark_slab_padding(struct kmem_cache *s,
> +                                       void *object) {}
> +
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
> +                               size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
> +
> +static inline void kasan_free_slab_pages(struct page *page, int order) {}
>
>  #endif /* CONFIG_KASAN */
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index c265bec..5f97037 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
>                                 (unsigned long)ZERO_SIZE_PTR)
>
>  #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>
>  struct mem_cgroup;
>  /*
> @@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
>  static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
>                 gfp_t flags, size_t size)
>  {
> -       return kmem_cache_alloc(s, flags);
> +       void *ret = kmem_cache_alloc(s, flags);
> +
> +       kasan_kmalloc(s, ret, size);
> +       return ret;
>  }
>
>  static __always_inline void *
> @@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
>                               gfp_t gfpflags,
>                               int node, size_t size)
>  {
> -       return kmem_cache_alloc_node(s, gfpflags, node);
> +       void *ret = kmem_cache_alloc_node(s, gfpflags, node);
> +
> +       kasan_kmalloc(s, ret, size);
> +       return ret;
>  }
>  #endif /* CONFIG_TRACING */
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index b458a00..d16b899 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
>  config KASAN
>         bool "AddressSanitizer: runtime memory debugger"
>         depends on !MEMORY_HOTPLUG
> +       depends on SLUB_DEBUG


What does SLUB_DEBUG do? I think that generally we don't want any
other *heavy* debug checks to be required for kasan.


>         help
>           Enables address sanitizer - runtime memory debugger,
>           designed to find out-of-bounds accesses and use-after-free bugs.
> diff --git a/mm/Makefile b/mm/Makefile
> index 7a4b87e..c08a70f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,9 @@
>  # Makefile for the linux memory manager.
>  #
>
> +KASAN_SANITIZE_slab_common.o := n
> +KASAN_SANITIZE_slub.o := n
> +
>  mmu-y                  := nommu.o
>  mmu-$(CONFIG_MMU)      := gup.o highmem.o madvise.o memory.o mincore.o \
>                            mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 7cfc1fe..3c1687a 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -30,6 +30,7 @@
>  #include <linux/kasan.h>
>
>  #include "kasan.h"
> +#include "../slab.h"
>
>  /*
>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> @@ -265,6 +266,102 @@ void kasan_free_pages(struct page *page, unsigned int order)
>                                 KASAN_FREE_PAGE);
>  }
>
> +void kasan_free_slab_pages(struct page *page, int order)

Isn't this callback followed by actually freeing the pages, and thus
by the kasan_free_pages callback that will poison the range? If so, I
would prefer not to double-poison.


> +{
> +       kasan_poison_shadow(page_address(page),
> +                       PAGE_SIZE << order, KASAN_SLAB_FREE);
> +}
> +
> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
> +{
> +       unsigned long object_end = (unsigned long)object + s->size;
> +       unsigned long padding_end = round_up(object_end, PAGE_SIZE);
> +       unsigned long padding_start = round_up(object_end,
> +                                       KASAN_SHADOW_SCALE_SIZE);
> +       size_t size = padding_end - padding_start;
> +
> +       if (size)
> +               kasan_poison_shadow((void *)padding_start,
> +                               size, KASAN_SLAB_PADDING);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> +       kasan_kmalloc(cache, object, cache->object_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> +       unsigned long size = cache->size;
> +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +

Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
used after free.

> +       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
> +               return;
> +
> +       kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> +       unsigned long redzone_start;
> +       unsigned long redzone_end;
> +
> +       if (unlikely(object == NULL))
> +               return;
> +
> +       redzone_start = round_up((unsigned long)(object + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = (unsigned long)object + cache->size;
> +
> +       kasan_unpoison_shadow(object, size);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> +       struct page *page;
> +       unsigned long redzone_start;
> +       unsigned long redzone_end;
> +
> +       if (unlikely(ptr == NULL))
> +               return;
> +
> +       page = virt_to_page(ptr);
> +       redzone_start = round_up((unsigned long)(ptr + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));

If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, the object does
not receive any redzone at all. Can we pass full memory block size
from above to fix it? Will compound_order(page) do?

> +
> +       kasan_unpoison_shadow(ptr, size);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               KASAN_PAGE_REDZONE);
> +}
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> +       struct page *page;
> +
> +       if (unlikely(object == ZERO_SIZE_PTR))
> +               return;
> +
> +       page = virt_to_head_page(object);
> +
> +       if (unlikely(!PageSlab(page)))
> +               kasan_kmalloc_large(object, size);
> +       else
> +               kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> +       struct page *page = virt_to_page(ptr);
> +
> +       kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +                       KASAN_FREE_PAGE);
> +}
> +
>  void __asan_load1(unsigned long addr)
>  {
>         check_memory_region(addr, 1, false);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 5e61799..b3974c7 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -7,6 +7,11 @@
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index f9d4e8d..c42f6ba 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -24,6 +24,7 @@
>  #include <linux/kasan.h>
>
>  #include "kasan.h"
> +#include "../slab.h"
>
>  /* Shadow layout customization. */
>  #define SHADOW_BYTES_PER_BLOCK 1
> @@ -54,10 +55,15 @@ static void print_error_description(struct access_info *info)
>         shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
>
>         switch (shadow_val) {
> +       case KASAN_PAGE_REDZONE:
> +       case KASAN_SLAB_PADDING:
> +       case KASAN_KMALLOC_REDZONE:
>         case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>                 bug_type = "out of bounds access";
>                 break;
>         case KASAN_FREE_PAGE:
> +       case KASAN_SLAB_FREE:
> +       case KASAN_KMALLOC_FREE:
>                 bug_type = "use after free";
>                 break;
>         case KASAN_SHADOW_GAP:
> @@ -73,12 +79,33 @@ static void print_error_description(struct access_info *info)
>  static void print_address_description(struct access_info *info)
>  {
>         struct page *page;
> +       struct kmem_cache *cache;
>         u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
>
>         page = virt_to_head_page((void *)info->access_addr);
>
>         switch (shadow_val) {
> +       case KASAN_SLAB_PADDING:
> +               cache = page->slab_cache;
> +               slab_err(cache, page, "access to slab redzone");
> +               dump_stack();
> +               break;
> +       case KASAN_KMALLOC_FREE:
> +       case KASAN_KMALLOC_REDZONE:
> +       case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +               if (PageSlab(page)) {
> +                       void *object;
> +                       void *slab_page = page_address(page);
> +
> +                       cache = page->slab_cache;
> +                       object = virt_to_obj(cache, slab_page,
> +                                       (void *)info->access_addr);
> +                       object_err(cache, page, object, "kasan error");
> +                       break;
> +               }
> +       case KASAN_PAGE_REDZONE:
>         case KASAN_FREE_PAGE:
> +       case KASAN_SLAB_FREE:
>                 dump_page(page, "kasan error");
>                 dump_stack();
>                 break;
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 3a6e0cf..33868b4 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -795,6 +795,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
>         page = alloc_kmem_pages(flags, order);
>         ret = page ? page_address(page) : NULL;
>         kmemleak_alloc(ret, size, 1, flags);
> +       kasan_kmalloc_large(ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmalloc_order);
> @@ -969,8 +970,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
>         if (p)
>                 ks = ksize(p);
>
> -       if (ks >= new_size)
> +       if (ks >= new_size) {
> +               kasan_krealloc((void *)p, new_size);
>                 return (void *)p;
> +       }
>
>         ret = kmalloc_track_caller(new_size, flags);
>         if (ret && p)
> diff --git a/mm/slub.c b/mm/slub.c
> index 9b1f75c..12ffdd0 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -33,6 +33,7 @@
>  #include <linux/stacktrace.h>
>  #include <linux/prefetch.h>
>  #include <linux/memcontrol.h>
> +#include <linux/kasan.h>
>
>  #include <trace/events/kmem.h>
>
> @@ -469,10 +470,12 @@ static int disable_higher_order_debug;
>
>  static inline void metadata_access_enable(void)
>  {
> +       kasan_disable_local();
>  }
>
>  static inline void metadata_access_disable(void)
>  {
> +       kasan_enable_local();
>  }
>
>  /*
> @@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
>  static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
>  {
>         kmemleak_alloc(ptr, size, 1, flags);
> +       kasan_kmalloc_large(ptr, size);
>  }
>
>  static inline void kfree_hook(const void *x)
>  {
>         kmemleak_free(x);
> +       kasan_kfree_large(x);
>  }
>
>  static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
> @@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>         flags &= gfp_allowed_mask;
>         kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
>         kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
> +       kasan_slab_alloc(s, object);
>  }
>
>  static inline void slab_free_hook(struct kmem_cache *s, void *x)
>  {
>         kmemleak_free_recursive(x, s->flags);
> +       kasan_slab_free(s, x);
>
>         /*
>          * Trouble is that we may no longer disable interrupts in the fast path
> @@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>                                 void *object)
>  {
>         setup_object_debug(s, page, object);
> -       if (unlikely(s->ctor))
> +       if (unlikely(s->ctor)) {
> +               kasan_slab_alloc(s, object);
>                 s->ctor(object);
> +       }
> +       kasan_slab_free(s, object);
>  }
>
>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>                 setup_object(s, page, p);
>                 if (likely(idx < page->objects))
>                         set_freepointer(s, p, p + s->size);

Sorry, I don't fully follow this code, so I will just ask some questions.
Can we have some slab padding after last object in this case as well?

> -               else
> +               else {
>                         set_freepointer(s, p, NULL);
> +                       kasan_mark_slab_padding(s, p);

kasan_mark_slab_padding poisons only up to end of the page. Can there
be multiple pages that we need to poison?

> +               }
>         }
>
>         page->freelist = start;
> @@ -1442,6 +1454,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>         }
>
>         kmemcheck_free_shadow(page, compound_order(page));
> +       kasan_free_slab_pages(page, compound_order(page));
>
>         mod_zone_page_state(page_zone(page),
>                 (s->flags & SLAB_RECLAIM_ACCOUNT) ?
> @@ -2488,6 +2501,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
>  {
>         void *ret = slab_alloc(s, gfpflags, _RET_IP_);
>         trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
> +       kasan_kmalloc(s, ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_trace);
> @@ -2514,6 +2528,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
>
>         trace_kmalloc_node(_RET_IP_, ret,
>                            size, s->size, gfpflags, node);
> +
> +       kasan_kmalloc(s, ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
> @@ -2897,6 +2913,7 @@ static void early_kmem_cache_node_alloc(int node)
>         init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>         init_tracking(kmem_cache_node, n);
>  #endif
> +       kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
>         init_kmem_cache_node(n);
>         inc_slabs_node(kmem_cache_node, node, page->objects);
>
> @@ -3269,6 +3286,8 @@ void *__kmalloc(size_t size, gfp_t flags)
>
>         trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
>
> +       kasan_kmalloc(s, ret, size);
> +
>         return ret;
>  }
>  EXPORT_SYMBOL(__kmalloc);
> @@ -3312,12 +3331,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>
>         trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
>
> +       kasan_kmalloc(s, ret, size);
> +
>         return ret;
>  }
>  EXPORT_SYMBOL(__kmalloc_node);
>  #endif
>
> -size_t ksize(const void *object)
> +static size_t __ksize(const void *object)
>  {
>         struct page *page;
>
> @@ -3333,6 +3354,15 @@ size_t ksize(const void *object)
>
>         return slab_ksize(page->slab_cache);
>  }
> +
> +size_t ksize(const void *object)
> +{
> +       size_t size = __ksize(object);
> +       /* We assume that ksize callers could use the whole allocated area,
> +          so we need to unpoison this area. */
> +       kasan_krealloc(object, size);
> +       return size;
> +}
>  EXPORT_SYMBOL(ksize);
>
>  void kfree(const void *x)
> --
> 2.1.1
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-26  4:48       ` Dmitry Vyukov
@ 2014-09-26  7:25         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-26  7:25 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Pekka Enberg, David Rientjes

On 09/26/2014 08:48 AM, Dmitry Vyukov wrote:
> On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
>>  config KASAN
>>         bool "AddressSanitizer: runtime memory debugger"
>>         depends on !MEMORY_HOTPLUG
>> +       depends on SLUB_DEBUG
> 
> 
> What does SLUB_DEBUG do? I think that generally we don't want any
> other *heavy* debug checks to be required for kasan.
> 

SLUB_DEBUG enables support for different debugging features.
It doesn't enable these debugging features by default, it only allows
you to switch them on/off at runtime.
Generally the SLUB_DEBUG option is enabled in most kernels; SLUB_DEBUG is
disabled only when the intention is to get a minimal kernel.

Without SLUB_DEBUG there will be no redzones and no user tracking info
(allocation/free stacktraces). KASAN won't be as useful without SLUB_DEBUG.
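
For example (just an illustration of a typical setup), something like:

     CONFIG_SLUB=y
     CONFIG_SLUB_DEBUG=y
     CONFIG_KASAN=y

keeps the debug machinery available, and CONFIG_SLUB_DEBUG_ON=y (or a
slub_debug= boot parameter) makes it active from boot.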


[...]

>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -30,6 +30,7 @@
>>  #include <linux/kasan.h>
>>
>>  #include "kasan.h"
>> +#include "../slab.h"
>>
>>  /*
>>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
>> @@ -265,6 +266,102 @@ void kasan_free_pages(struct page *page, unsigned int order)
>>                                 KASAN_FREE_PAGE);
>>  }
>>
>> +void kasan_free_slab_pages(struct page *page, int order)
> 
> Doesn't this callback followed by actually freeing the pages, and so
> kasan_free_pages callback that will poison the range? If so, I would
> prefer to not double poison.
> 

Yes, this could be removed.

> 
>> +{
>> +       kasan_poison_shadow(page_address(page),
>> +                       PAGE_SIZE << order, KASAN_SLAB_FREE);
>> +}
>> +
>> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
>> +{
>> +       unsigned long object_end = (unsigned long)object + s->size;
>> +       unsigned long padding_end = round_up(object_end, PAGE_SIZE);
>> +       unsigned long padding_start = round_up(object_end,
>> +                                       KASAN_SHADOW_SCALE_SIZE);
>> +       size_t size = padding_end - padding_start;
>> +
>> +       if (size)
>> +               kasan_poison_shadow((void *)padding_start,
>> +                               size, KASAN_SLAB_PADDING);
>> +}
>> +
>> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
>> +{
>> +       kasan_kmalloc(cache, object, cache->object_size);
>> +}
>> +
>> +void kasan_slab_free(struct kmem_cache *cache, void *object)
>> +{
>> +       unsigned long size = cache->size;
>> +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> +
> 
> Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
> used after free.
> 

Ok.

>> +       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>> +               return;
>> +
>> +       kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>> +}
>> +
>> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
>> +{
>> +       unsigned long redzone_start;
>> +       unsigned long redzone_end;
>> +
>> +       if (unlikely(object == NULL))
>> +               return;
>> +
>> +       redzone_start = round_up((unsigned long)(object + size),
>> +                               KASAN_SHADOW_SCALE_SIZE);
>> +       redzone_end = (unsigned long)object + cache->size;
>> +
>> +       kasan_unpoison_shadow(object, size);
>> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> +               KASAN_KMALLOC_REDZONE);
>> +
>> +}
>> +EXPORT_SYMBOL(kasan_kmalloc);
>> +
>> +void kasan_kmalloc_large(const void *ptr, size_t size)
>> +{
>> +       struct page *page;
>> +       unsigned long redzone_start;
>> +       unsigned long redzone_end;
>> +
>> +       if (unlikely(ptr == NULL))
>> +               return;
>> +
>> +       page = virt_to_page(ptr);
>> +       redzone_start = round_up((unsigned long)(ptr + size),
>> +                               KASAN_SHADOW_SCALE_SIZE);
>> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> 
> If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, the object does
> not receive any redzone at all. 

If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, there will be a redzone of
KASAN_SHADOW_SCALE_SIZE + 1 bytes. There will be no redzone if and only if
(size == PAGE_SIZE << compound_order(page)).

> Can we pass full memory block size
> from above to fix it? Will compound_order(page) do?
> 

What is the full memory block size?
PAGE_SIZE << compound_order(page) is how much was really allocated.
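
To make the arithmetic concrete (illustrative numbers only: PAGE_SIZE == 4096,
KASAN_SHADOW_SCALE_SIZE == 8, order-0 allocation, size == 4096 - 8 - 1 == 4087):

     redzone_start = round_up(ptr + 4087, 8) = ptr + 4088
     redzone_end   = ptr + 4096

so the shadow byte for [4080, 4088) becomes 7 (byte 4087 is inaccessible) and
the granule [4088, 4096) is poisoned with KASAN_PAGE_REDZONE, i.e.
9 == KASAN_SHADOW_SCALE_SIZE + 1 inaccessible bytes at the end.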


[..]

>>
>>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>                 setup_object(s, page, p);
>>                 if (likely(idx < page->objects))
>>                         set_freepointer(s, p, p + s->size);
> 
> Sorry, I don't fully follow this code, so I will just ask some questions.
> Can we have some slab padding after last object in this case as well?
> 
This case is not for the last object; the padding is the space after the last object.
The last object is initialized below, in the else case.

>> -               else
>> +               else {
>>                         set_freepointer(s, p, NULL);
>> +                       kasan_mark_slab_padding(s, p);
> 
> kasan_mark_slab_padding poisons only up to end of the page. Can there
> be multiple pages that we need to poison?
> 
Yep, that's a good catch.
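
Something like this should cover the whole high-order page (just a rough
sketch; passing the struct page in is my assumption, it's not what the
posted patch does):

	void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
				struct page *page)
	{
		unsigned long page_start = (unsigned long)page_address(page);
		unsigned long page_end = page_start +
					(PAGE_SIZE << compound_order(page));
		unsigned long object_end = (unsigned long)object + s->size;
		unsigned long padding_start = round_up(object_end,
						KASAN_SHADOW_SCALE_SIZE);

		/* Poison everything from the end of the last object up to
		 * the end of the (possibly high-order) slab page. */
		if (padding_start < page_end)
			kasan_poison_shadow((void *)padding_start,
					page_end - padding_start,
					KASAN_SLAB_PADDING);
	}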

Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-26  4:48       ` Dmitry Vyukov
@ 2014-09-26 14:22         ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-09-26 14:22 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Thu, 25 Sep 2014, Dmitry Vyukov wrote:

> > +       depends on SLUB_DEBUG
>
>
> What does SLUB_DEBUG do? I think that generally we don't want any
> other *heavy* debug checks to be required for kasan.

SLUB_DEBUG includes the capabilities for debugging. It does not switch
debugging on by default. SLUB_DEBUG_ON results in a kernel that boots
with active debugging. Without SLUB_DEBUG_ON, a kernel parameter activates
debugging.
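
For example, booting with

     slub_debug=FZPU

turns on sanity checks (F), red zoning (Z), poisoning (P) and user tracking (U)
for all caches; without a parameter the debug code is compiled in but inactive.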

> > +{
> > +       unsigned long size = cache->size;
> > +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> > +
>
> Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
> used after free.

Add "within the rcu period"

> >  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> > @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> >                 setup_object(s, page, p);
> >                 if (likely(idx < page->objects))
> >                         set_freepointer(s, p, p + s->size);
>
> Sorry, I don't fully follow this code, so I will just ask some questions.
> Can we have some slab padding after last object in this case as well?

This is the free case. If poisoning is enabled, then the object will be
overwritten on free. Padding is used depending on the need to align the
object and is optional. Redzoning will occur if requested. Are you asking
for redzoning?
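
For example (from memory, so double-check the constants): with poisoning on,
a freed object is filled with POISON_FREE (0x6b) and terminated with
POISON_END (0xa5), so stale reads of freed memory typically show up as 0x6b
patterns.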

> kasan_mark_slab_padding poisons only up to end of the page. Can there
> be multiple pages that we need to poison?

If there is a higher-order page, then only the end portion needs to be
poisoned. Objects may then straddle order-0 boundaries.
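
For example (numbers picked only for illustration): an order-1 slab is
2 * 4096 bytes; with s->size == 700 it holds 11 objects at offsets 0, 700,
..., 7000, the object at offset 3500 straddles the first 4096 boundary, and
the 8192 - 7700 = 492 bytes after the last object are the padding that needs
poisoning.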


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-26  7:25         ` Andrey Ryabinin
@ 2014-09-26 15:52           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 15:52 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Pekka Enberg, David Rientjes

On Fri, Sep 26, 2014 at 12:25 AM, Andrey Ryabinin
<a.ryabinin@samsung.com> wrote:
> On 09/26/2014 08:48 AM, Dmitry Vyukov wrote:
>> On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>>> --- a/lib/Kconfig.kasan
>>> +++ b/lib/Kconfig.kasan
>>> @@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
>>>  config KASAN
>>>         bool "AddressSanitizer: runtime memory debugger"
>>>         depends on !MEMORY_HOTPLUG
>>> +       depends on SLUB_DEBUG
>>
>>
>> What does SLUB_DEBUG do? I think that generally we don't want any
>> other *heavy* debug checks to be required for kasan.
>>
>
> SLUB_DEBUG enables support for different debugging features.
> It doesn't enable these debugging features by default, it only allows
> you to switch them on/off at runtime.
> Generally the SLUB_DEBUG option is enabled in most kernels; SLUB_DEBUG is
> disabled only when the intention is to get a minimal kernel.
>
> Without SLUB_DEBUG there will be no redzones and no user tracking info
> (allocation/free stacktraces). KASAN won't be as useful without SLUB_DEBUG.

Ack.

>>> --- a/mm/kasan/kasan.c
>>> +++ b/mm/kasan/kasan.c
>>> @@ -30,6 +30,7 @@
>>>  #include <linux/kasan.h>
>>>
>>>  #include "kasan.h"
>>> +#include "../slab.h"
>>>
>>>  /*
>>>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
>>> @@ -265,6 +266,102 @@ void kasan_free_pages(struct page *page, unsigned int order)
>>>                                 KASAN_FREE_PAGE);
>>>  }
>>>
>>> +void kasan_free_slab_pages(struct page *page, int order)
>>
>> Doesn't this callback followed by actually freeing the pages, and so
>> kasan_free_pages callback that will poison the range? If so, I would
>> prefer to not double poison.
>>
>
> Yes, this could be removed.
>>> +{
>>> +       kasan_poison_shadow(page_address(page),
>>> +                       PAGE_SIZE << order, KASAN_SLAB_FREE);
>>> +}
>>> +
>>> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
>>> +{
>>> +       unsigned long object_end = (unsigned long)object + s->size;
>>> +       unsigned long padding_end = round_up(object_end, PAGE_SIZE);
>>> +       unsigned long padding_start = round_up(object_end,
>>> +                                       KASAN_SHADOW_SCALE_SIZE);
>>> +       size_t size = padding_end - padding_start;
>>> +
>>> +       if (size)
>>> +               kasan_poison_shadow((void *)padding_start,
>>> +                               size, KASAN_SLAB_PADDING);
>>> +}
>>> +
>>> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
>>> +{
>>> +       kasan_kmalloc(cache, object, cache->object_size);
>>> +}
>>> +
>>> +void kasan_slab_free(struct kmem_cache *cache, void *object)
>>> +{
>>> +       unsigned long size = cache->size;
>>> +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>>> +
>>
>> Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
>> used after free.
>>
>
> Ok.
>
>>> +       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>>> +               return;
>>> +
>>> +       kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>>> +}
>>> +
>>> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
>>> +{
>>> +       unsigned long redzone_start;
>>> +       unsigned long redzone_end;
>>> +
>>> +       if (unlikely(object == NULL))
>>> +               return;
>>> +
>>> +       redzone_start = round_up((unsigned long)(object + size),
>>> +                               KASAN_SHADOW_SCALE_SIZE);
>>> +       redzone_end = (unsigned long)object + cache->size;
>>> +
>>> +       kasan_unpoison_shadow(object, size);
>>> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>> +               KASAN_KMALLOC_REDZONE);
>>> +
>>> +}
>>> +EXPORT_SYMBOL(kasan_kmalloc);
>>> +
>>> +void kasan_kmalloc_large(const void *ptr, size_t size)
>>> +{
>>> +       struct page *page;
>>> +       unsigned long redzone_start;
>>> +       unsigned long redzone_end;
>>> +
>>> +       if (unlikely(ptr == NULL))
>>> +               return;
>>> +
>>> +       page = virt_to_page(ptr);
>>> +       redzone_start = round_up((unsigned long)(ptr + size),
>>> +                               KASAN_SHADOW_SCALE_SIZE);
>>> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>>
>> If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, the object does
>> not receive any redzone at all.
>
> If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, there will be a redzone of
> KASAN_SHADOW_SCALE_SIZE + 1 bytes. There will be no redzone if and only if
> (size == PAGE_SIZE << compound_order(page)).

Ah, OK, I misread the code.
The current code looks fine.

>> Can we pass full memory block size
>> from above to fix it? Will compound_order(page) do?
>>
>
> What is the full memory block size?
> PAGE_SIZE << compound_order(page) is how much was really allocated.
>>>
>>>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>> @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>>                 setup_object(s, page, p);
>>>                 if (likely(idx < page->objects))
>>>                         set_freepointer(s, p, p + s->size);
>>
>> Sorry, I don't fully follow this code, so I will just ask some questions.
>> Can we have some slab padding after last object in this case as well?
>>
> This case is not for the last object; the padding is the space after the last object.
> The last object is initialized below, in the else case.
>
>>> -               else
>>> +               else {
>>>                         set_freepointer(s, p, NULL);
>>> +                       kasan_mark_slab_padding(s, p);
>>
>> kasan_mark_slab_padding poisons only up to end of the page. Can there
>> be multiple pages that we need to poison?
>>
> Yep, that's a good catch.
>
> Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
@ 2014-09-26 15:52           ` Dmitry Vyukov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 15:52 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Pekka Enberg, David Rientjes

On Fri, Sep 26, 2014 at 12:25 AM, Andrey Ryabinin
<a.ryabinin@samsung.com> wrote:
> On 09/26/2014 08:48 AM, Dmitry Vyukov wrote:
>> On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>>> --- a/lib/Kconfig.kasan
>>> +++ b/lib/Kconfig.kasan
>>> @@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
>>>  config KASAN
>>>         bool "AddressSanitizer: runtime memory debugger"
>>>         depends on !MEMORY_HOTPLUG
>>> +       depends on SLUB_DEBUG
>>
>>
>> What does SLUB_DEBUG do? I think that generally we don't want any
>> other *heavy* debug checks to be required for kasan.
>>
>
> SLUB_DEBUG enables support for different debugging features.
> It doesn't enables this debugging features by default, it only allows
> you to switch them on/off in runtime.
> Generally SLUB_DEBUG option is enabled in most kernels. SLUB_DEBUG disabled
> only with intention to get minimal kernel.
>
> Without SLUB_DEBUG there will be no redzones, no user tracking info (allocation/free stacktraces).
> KASAN won't be so usefull without SLUB_DEBUG.

Ack.

>>> --- a/mm/kasan/kasan.c
>>> +++ b/mm/kasan/kasan.c
>>> @@ -30,6 +30,7 @@
>>>  #include <linux/kasan.h>
>>>
>>>  #include "kasan.h"
>>> +#include "../slab.h"
>>>
>>>  /*
>>>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
>>> @@ -265,6 +266,102 @@ void kasan_free_pages(struct page *page, unsigned int order)
>>>                                 KASAN_FREE_PAGE);
>>>  }
>>>
>>> +void kasan_free_slab_pages(struct page *page, int order)
>>
>> Doesn't this callback followed by actually freeing the pages, and so
>> kasan_free_pages callback that will poison the range? If so, I would
>> prefer to not double poison.
>>
>
> Yes, this could be removed.
>>> +{
>>> +       kasan_poison_shadow(page_address(page),
>>> +                       PAGE_SIZE << order, KASAN_SLAB_FREE);
>>> +}
>>> +
>>> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object)
>>> +{
>>> +       unsigned long object_end = (unsigned long)object + s->size;
>>> +       unsigned long padding_end = round_up(object_end, PAGE_SIZE);
>>> +       unsigned long padding_start = round_up(object_end,
>>> +                                       KASAN_SHADOW_SCALE_SIZE);
>>> +       size_t size = padding_end - padding_start;
>>> +
>>> +       if (size)
>>> +               kasan_poison_shadow((void *)padding_start,
>>> +                               size, KASAN_SLAB_PADDING);
>>> +}
>>> +
>>> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
>>> +{
>>> +       kasan_kmalloc(cache, object, cache->object_size);
>>> +}
>>> +
>>> +void kasan_slab_free(struct kmem_cache *cache, void *object)
>>> +{
>>> +       unsigned long size = cache->size;
>>> +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>>> +
>>
>> Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
>> used after free.
>>
>
> Ok.
>
>>> +       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>>> +               return;
>>> +
>>> +       kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>>> +}
>>> +
>>> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
>>> +{
>>> +       unsigned long redzone_start;
>>> +       unsigned long redzone_end;
>>> +
>>> +       if (unlikely(object == NULL))
>>> +               return;
>>> +
>>> +       redzone_start = round_up((unsigned long)(object + size),
>>> +                               KASAN_SHADOW_SCALE_SIZE);
>>> +       redzone_end = (unsigned long)object + cache->size;
>>> +
>>> +       kasan_unpoison_shadow(object, size);
>>> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>> +               KASAN_KMALLOC_REDZONE);
>>> +
>>> +}
>>> +EXPORT_SYMBOL(kasan_kmalloc);
>>> +
>>> +void kasan_kmalloc_large(const void *ptr, size_t size)
>>> +{
>>> +       struct page *page;
>>> +       unsigned long redzone_start;
>>> +       unsigned long redzone_end;
>>> +
>>> +       if (unlikely(ptr == NULL))
>>> +               return;
>>> +
>>> +       page = virt_to_page(ptr);
>>> +       redzone_start = round_up((unsigned long)(ptr + size),
>>> +                               KASAN_SHADOW_SCALE_SIZE);
>>> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>>
>> If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, the object does
>> not receive any redzone at all.
>
> If size == N*PAGE_SIZE - KASAN_SHADOW_SCALE_SIZE - 1, there will be redzone
> KASAN_SHADOW_SCALE_SIZE + 1 bytes. There will be no readzone if and only if
> (size == PAGE_SIZE << compound_order(page))

Ah, OK, I misread the code.
The current code looks fine.

>> Can we pass full memory block size
>> from above to fix it? Will compound_order(page) do?
>>
>
> What is full memory block size?
> PAGE_SIZE << compound_order(page) is how much was really allocated.
>>>
>>>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>> @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>>                 setup_object(s, page, p);
>>>                 if (likely(idx < page->objects))
>>>                         set_freepointer(s, p, p + s->size);
>>
>> Sorry, I don't fully follow this code, so I will just ask some questions.
>> Can we have some slab padding after the last object in this case as well?
>>
> This case is not for the last object. Padding is the space after the last object.
> The last object is initialized below, in the else case.
>
>>> -               else
>>> +               else {
>>>                         set_freepointer(s, p, NULL);
>>> +                       kasan_mark_slab_padding(s, p);
>>
>> kasan_mark_slab_padding poisons only up to the end of the page. Can there
>> be multiple pages that we need to poison?
>>
> Yep, that's a good catch.
>
> Thanks.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-09-26 14:22         ` Christoph Lameter
@ 2014-09-26 15:55           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 15:55 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Fri, Sep 26, 2014 at 7:22 AM, Christoph Lameter <cl@linux.com> wrote:
> On Thu, 25 Sep 2014, Dmitry Vyukov wrote:
>
>> > +       depends on SLUB_DEBUG
>>
>>
>> What does SLUB_DEBUG do? I think that generally we don't want any
>> other *heavy* debug checks to be required for kasan.
>
> SLUB_DEBUG includes the capabilities for debugging. It does not switch
> debugging on by default. SLUB_DEBUG_ON results in a kernel that boots
> with active debugging. Without SLUB_DEBUG_ON, a kernel parameter activates
> debugging.

Ack, thanks for the explanation.


>> > +{
>> > +       unsigned long size = cache->size;
>> > +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> > +
>>
>> Add a comment saying that SLAB_DESTROY_BY_RCU objects can be "legally"
>> used after free.
>
> Add "within the rcu period"
>
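
For reference, with both remarks folded in, kasan_slab_free() from the quoted patch would presumably read roughly as follows (a sketch, not necessarily the final code):

	void kasan_slab_free(struct kmem_cache *cache, void *object)
	{
		unsigned long size = cache->size;
		unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);

		/*
		 * SLAB_DESTROY_BY_RCU objects can be "legally" used after free
		 * within the RCU grace period, so don't poison them here.
		 */
		if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
			return;

		kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
	}
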
>> >  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> > @@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> >                 setup_object(s, page, p);
>> >                 if (likely(idx < page->objects))
>> >                         set_freepointer(s, p, p + s->size);
>>
>> Sorry, I don't fully follow this code, so I will just ask some questions.
>> Can we have some slab padding after the last object in this case as well?
>
> This is the free case. If poisoning is enabled, the object will be
> overwritten on free. Padding is used depending on the need to align the
> object and is optional. Redzoning will occur if requested. Are you asking
> for redzoning?

I am not asking for redzoning yet.


>> kasan_mark_slab_padding poisons only up to the end of the page. Can there
>> be multiple pages that we need to poison?
>
> If there is a higher-order page, then only the end portion needs to be
> poisoned. Objects may straddle order-0 boundaries in that case.
>
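
For illustration, a variant of kasan_mark_slab_padding() that poisons up to the end of the whole (possibly higher-order) slab page might look roughly like the sketch below; passing the struct page and using compound_order() are assumptions here, not something taken from the posted patch:

	void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
				     struct page *page)
	{
		unsigned long object_end = (unsigned long)object + s->size;
		unsigned long padding_start = round_up(object_end,
						KASAN_SHADOW_SCALE_SIZE);
		/* end of the compound page, not just of the first order-0 page */
		unsigned long padding_end = (unsigned long)page_address(page) +
						(PAGE_SIZE << compound_order(page));

		if (padding_end > padding_start)
			kasan_poison_shadow((void *)padding_start,
					padding_end - padding_start,
					KASAN_SLAB_PADDING);
	}
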

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-09-26 17:01     ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-09-26 17:01 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Dave Jones

On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
> Hi.
> 
> This is the third iteration of the kernel address sanitizer (KASan).
> 
> KASan is a runtime memory debugger designed to find use-after-free
> and out-of-bounds bugs.
> 
> Currently KASAN is supported only for the x86_64 architecture and requires the kernel
> to be built with the SLUB allocator.
> KASAN uses compile-time instrumentation for checking every memory access, therefore you
> will need a fresh GCC >= v5.0.0.

Hi Andrey,

I tried this patchset, with the latest gcc, and I'm seeing the following:

arch/x86/kernel/head.o: In function `_GLOBAL__sub_I_00099_0_reserve_ebda_region':
/home/sasha/linux-next/arch/x86/kernel/head.c:71: undefined reference to `__asan_init_v4'
init/built-in.o: In function `_GLOBAL__sub_I_00099_0___ksymtab_system_state':
/home/sasha/linux-next/init/main.c:1034: undefined reference to `__asan_init_v4'
init/built-in.o: In function `_GLOBAL__sub_I_00099_0_init_uts_ns':
/home/sasha/linux-next/init/version.c:50: undefined reference to `__asan_init_v4'
init/built-in.o: In function `_GLOBAL__sub_I_00099_0_root_mountflags':
/home/sasha/linux-next/init/do_mounts.c:638: undefined reference to `__asan_init_v4'
init/built-in.o: In function `_GLOBAL__sub_I_00099_0_rd_prompt':
/home/sasha/linux-next/init/do_mounts_rd.c:361: undefined reference to `__asan_init_v4'
init/built-in.o:/home/sasha/linux-next/init/do_mounts_md.c:312: more undefined references to `__asan_init_v4' follow


What am I missing?


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 17:01     ` Sasha Levin
@ 2014-09-26 17:07       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:07 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Dave Jones

On Fri, Sep 26, 2014 at 10:01 AM, Sasha Levin <sasha.levin@oracle.com> wrote:
> On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
>> Hi.
>>
>> This is the third iteration of the kernel address sanitizer (KASan).
>>
>> KASan is a runtime memory debugger designed to find use-after-free
>> and out-of-bounds bugs.
>>
>> Currently KASAN is supported only for the x86_64 architecture and requires the kernel
>> to be built with the SLUB allocator.
>> KASAN uses compile-time instrumentation for checking every memory access, therefore you
>> will need a fresh GCC >= v5.0.0.
>
> Hi Andrey,
>
> I tried this patchset, with the latest gcc, and I'm seeing the following:
>
> arch/x86/kernel/head.o: In function `_GLOBAL__sub_I_00099_0_reserve_ebda_region':
> /home/sasha/linux-next/arch/x86/kernel/head.c:71: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0___ksymtab_system_state':
> /home/sasha/linux-next/init/main.c:1034: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_init_uts_ns':
> /home/sasha/linux-next/init/version.c:50: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_root_mountflags':
> /home/sasha/linux-next/init/do_mounts.c:638: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_rd_prompt':
> /home/sasha/linux-next/init/do_mounts_rd.c:361: undefined reference to `__asan_init_v4'
> init/built-in.o:/home/sasha/linux-next/init/do_mounts_md.c:312: more undefined references to `__asan_init_v4' follow
>
>
> What am I missing?


Emission of __asan_init_vx needs to be disabled when
-fsanitize=kernel-address. Our kernel does not boot with them at all.
It probably hits some limit that could be increased, but I
don't want to investigate what that limit is, as __asan_init is not
needed for kasan at all.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-26 17:10       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:10 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Catalin Marinas

Looks good to me.

We can disable kasan instrumentation of this file as well.

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> kmalloc internally rounds up the allocation size, and kmemleak
> uses the rounded-up size as the object's size. This makes kasan
> complain while kmemleak scans memory or calculates the object's
> checksum. The simplest solution here is to disable kasan.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/kmemleak.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index 3cda50c..9bda1b3 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -98,6 +98,7 @@
>  #include <asm/processor.h>
>  #include <linux/atomic.h>
>
> +#include <linux/kasan.h>
>  #include <linux/kmemcheck.h>
>  #include <linux/kmemleak.h>
>  #include <linux/memory_hotplug.h>
> @@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
>         if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
>                 return false;
>
> +       kasan_disable_local();
>         object->checksum = crc32(0, (void *)object->pointer, object->size);
> +       kasan_enable_local();
> +
>         return object->checksum != old_csum;
>  }
>
> @@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
>                                                   BYTES_PER_POINTER))
>                         continue;
>
> +               kasan_disable_local();
>                 pointer = *ptr;
> +               kasan_enable_local();
>
>                 object = find_and_get_object(pointer, 1);
>                 if (!object)
> --
> 2.1.1
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 12/13] lib: add kasan test module
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-26 17:11       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:11 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm

Looks good to me.

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This is a test module doing various nasty things like
> out-of-bounds accesses and use-after-free. It is useful for testing
> kernel debugging features like the kernel address sanitizer.
>
> It mostly concentrates on testing the slab allocator, but we
> might want to add more different stuff here in the future (like
> out-of-bounds accesses to stack/global variables and so on).
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  lib/Kconfig.kasan |   8 ++
>  lib/Makefile      |   1 +
>  lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 263 insertions(+)
>  create mode 100644 lib/test_kasan.c
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index d16b899..faddb0e 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -19,4 +19,12 @@ config KASAN_SHADOW_OFFSET
>         hex
>         default 0xdfffe90000000000 if X86_64
>
> +config TEST_KASAN
> +       tristate "Module for testing kasan for bug detection"
> +       depends on m
> +       help
> +         This is a test module doing various nasty things like
> +         out of bounds accesses, use after free. It is useful for testing
> +         kernel debugging features like kernel address sanitizer.
> +
>  endif
> diff --git a/lib/Makefile b/lib/Makefile
> index 84a56f7..d620d27 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_MODULE) += test_module.o
>  obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
>  obj-$(CONFIG_TEST_BPF) += test_bpf.o
>  obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
> +obj-$(CONFIG_TEST_KASAN) += test_kasan.o
>
>  ifeq ($(CONFIG_DEBUG_KOBJECT),y)
>  CFLAGS_kobject.o += -DDEBUG
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> new file mode 100644
> index 0000000..66a04eb
> --- /dev/null
> +++ b/lib/test_kasan.c
> @@ -0,0 +1,254 @@
> +/*
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
> +
> +#include <linux/kernel.h>
> +#include <linux/printk.h>
> +#include <linux/slab.h>
> +#include <linux/string.h>
> +#include <linux/module.h>
> +
> +static noinline void __init kmalloc_oob_right(void)
> +{
> +       char *ptr;
> +       size_t size = 123;
> +
> +       pr_info("out-of-bounds to right\n");
> +       ptr = kmalloc(size , GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 'x';
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_oob_left(void)
> +{
> +       char *ptr;
> +       size_t size = 15;
> +
> +       pr_info("out-of-bounds to left\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       *ptr = *(ptr - 1);
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_node_oob_right(void)
> +{
> +       char *ptr;
> +       size_t size = 4096;
> +
> +       pr_info("kmalloc_node(): out-of-bounds to right\n");
> +       ptr = kmalloc_node(size , GFP_KERNEL, 0);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 0;
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_large_oob_rigth(void)
> +{
> +       char *ptr;
> +       size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
> +
> +       pr_info("kmalloc large allocation: out-of-bounds to right\n");
> +       ptr = kmalloc(size , GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 0;
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_oob_krealloc_more(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size1 = 17;
> +       size_t size2 = 19;
> +
> +       pr_info("out-of-bounds after krealloc more\n");
> +       ptr1 = kmalloc(size1, GFP_KERNEL);
> +       ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               return;
> +       }
> +
> +       ptr2[size2] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_krealloc_less(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size1 = 17;
> +       size_t size2 = 15;
> +
> +       pr_info("out-of-bounds after krealloc less\n");
> +       ptr1 = kmalloc(size1, GFP_KERNEL);
> +       ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               return;
> +       }
> +       ptr2[size1] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_16(void)
> +{
> +       struct {
> +               u64 words[2];
> +       } *ptr1, *ptr2;
> +
> +       pr_info("kmalloc out-of-bounds for 16-bytes access\n");
> +       ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
> +       ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               kfree(ptr2);
> +               return;
> +       }
> +       *ptr1 = *ptr2;
> +       kfree(ptr1);
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_in_memset(void)
> +{
> +       char *ptr;
> +       size_t size = 666;
> +
> +       pr_info("out-of-bounds in memset\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       memset(ptr, 0, size+5);
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_uaf(void)
> +{
> +       char *ptr;
> +       size_t size = 10;
> +
> +       pr_info("use-after-free\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr);
> +       *(ptr + 8) = 'x';
> +}
> +
> +static noinline void __init kmalloc_uaf_memset(void)
> +{
> +       char *ptr;
> +       size_t size = 33;
> +
> +       pr_info("use-after-free in memset\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr);
> +       memset(ptr, 0, size);
> +}
> +
> +static noinline void __init kmalloc_uaf2(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size = 43;
> +
> +       pr_info("use-after-free after another kmalloc\n");
> +       ptr1 = kmalloc(size, GFP_KERNEL);
> +       if (!ptr1) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr1);
> +       ptr2 = kmalloc(size, GFP_KERNEL);
> +       if (!ptr2) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr1[40] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmem_cache_oob(void)
> +{
> +       char *p;
> +       size_t size = 200;
> +       struct kmem_cache *cache = kmem_cache_create("test_cache",
> +                                               size, 0,
> +                                               0, NULL);
> +       if (!cache) {
> +               pr_err("Cache allocation failed\n");
> +               return;
> +       }
> +       pr_info("out-of-bounds in kmem_cache_alloc\n");
> +       p = kmem_cache_alloc(cache, GFP_KERNEL);
> +       if (!p) {
> +               pr_err("Allocation failed\n");
> +               kmem_cache_destroy(cache);
> +               return;
> +       }
> +
> +       *p = p[size];
> +       kmem_cache_free(cache, p);
> +       kmem_cache_destroy(cache);
> +}
> +
> +int __init kmalloc_tests_init(void)
> +{
> +       kmalloc_oob_right();
> +       kmalloc_oob_left();
> +       kmalloc_node_oob_right();
> +       kmalloc_large_oob_rigth();
> +       kmalloc_oob_krealloc_more();
> +       kmalloc_oob_krealloc_less();
> +       kmalloc_oob_16();
> +       kmalloc_oob_in_memset();
> +       kmalloc_uaf();
> +       kmalloc_uaf_memset();
> +       kmalloc_uaf2();
> +       kmem_cache_oob();
> +       return -EAGAIN;
> +}
> +
> +module_init(kmalloc_tests_init);
> +MODULE_LICENSE("GPL");
> --
> 2.1.1
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 17:01     ` Sasha Levin
@ 2014-09-26 17:17       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-26 17:17 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

2014-09-26 21:01 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
>> Hi.
>>
>> This is the third iteration of the kernel address sanitizer (KASan).
>>
>> KASan is a runtime memory debugger designed to find use-after-free
>> and out-of-bounds bugs.
>>
>> Currently KASAN is supported only for the x86_64 architecture and requires the kernel
>> to be built with the SLUB allocator.
>> KASAN uses compile-time instrumentation for checking every memory access, therefore you
>> will need a fresh GCC >= v5.0.0.
>
> Hi Andrey,
>
> I tried this patchset, with the latest gcc, and I'm seeing the following:
>
> arch/x86/kernel/head.o: In function `_GLOBAL__sub_I_00099_0_reserve_ebda_region':
> /home/sasha/linux-next/arch/x86/kernel/head.c:71: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0___ksymtab_system_state':
> /home/sasha/linux-next/init/main.c:1034: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_init_uts_ns':
> /home/sasha/linux-next/init/version.c:50: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_root_mountflags':
> /home/sasha/linux-next/init/do_mounts.c:638: undefined reference to `__asan_init_v4'
> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_rd_prompt':
> /home/sasha/linux-next/init/do_mounts_rd.c:361: undefined reference to `__asan_init_v4'
> init/built-in.o:/home/sasha/linux-next/init/do_mounts_md.c:312: more undefined references to `__asan_init_v4' follow
>
>
> What am I missing?
>

__asan_init_v* is a versioned part of the compiler's API. Recently it was changed
in gcc - https://gcc.gnu.org/ml/gcc-patches/2014-09/msg01872.html

To fix this, just add:

void __asan_init_v4(void) {}
EXPORT_SYMBOL(__asan_init_v4);

to mm/kasan/kasan.c.

I'll fix this in the next spin.
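
In context, the stub would presumably sit in mm/kasan/kasan.c roughly like this (a sketch only):

	/*
	 * GCC emits a constructor call to __asan_init_v<N> in every
	 * instrumented file; kasan itself does not need it, so an empty
	 * stub is enough to satisfy the linker.
	 */
	void __asan_init_v4(void) {}
	EXPORT_SYMBOL(__asan_init_v4);
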

>
> Thanks,
> Sasha
>
> --



-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-26 17:18       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:18 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Michal Marek

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This patch is only a demonstration of how easily this could be achieved.
> GCC doesn't support this feature yet. Two patches are required for this:
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html
>
> In inline instrumentation mode the compiler directly inserts code
> checking shadow memory instead of emitting __asan_load/__asan_store
> calls.
> This is usually faster than outline. In some workloads inline is
> 2 times faster than outline instrumentation.
>
> The downside of inline instrumentation is a bloated kernel .text size:
>
> size noasan/vmlinux
>    text     data     bss      dec     hex    filename
> 11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux
>
> size outline/vmlinux
>    text    data     bss      dec      hex    filename
> 16553474  1602592  950272  19106338  1238a22 outline/vmlinux
>
> size inline/vmlinux
>    text    data     bss      dec      hex    filename
> 32064759  1598688  946176  34609623  21019d7 inline/vmlinux
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Makefile          |  5 +++++
>  lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
>  mm/kasan/report.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 74 insertions(+)
>
> diff --git a/Makefile b/Makefile
> index 6cefe5e..fe7c534 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -773,6 +773,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
>  ifdef CONFIG_KASAN
> +ifdef CONFIG_KASAN_INLINE
> +CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
> +                $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
> +endif
> +
>    ifeq ($(CFLAGS_KASAN),)
>      $(warning Cannot use CONFIG_KASAN: \
>               -fsanitize=kernel-address not supported by compiler)
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index faddb0e..c4ac040 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -27,4 +27,28 @@ config TEST_KASAN
>           out of bounds accesses, use after free. It is usefull for testing
>           kernel debugging features like kernel address sanitizer.
>
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_INLINE if X86_64
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access the compiler inserts a function call
> +         to __asan_load*/__asan_store*. These functions perform the check
> +         of shadow memory. This is slower than inline instrumentation,
> +         however it doesn't bloat the size of the kernel's .text section as
> +         much as inline does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         Compiler directly inserts code checking shadow memory before
> +         memory accesses. This is faster than outline (in some workloads
> +         it gives about x2 boost over outline instrumentation), but
> +         makes the kernel's .text size much bigger.
> +
> +endchoice
> +
>  endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index c42f6ba..a9262f8 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -212,3 +212,48 @@ void kasan_report_user_access(struct access_info *info)
>                 "=================================\n");
>         spin_unlock_irqrestore(&report_lock, flags);
>  }
> +
> +#define CALL_KASAN_REPORT(__addr, __size, __is_write) \
> +       struct access_info info;                      \
> +       info.access_addr = __addr;                    \
> +       info.access_size = __size;                    \
> +       info.is_write = __is_write;                   \
> +       info.ip = _RET_IP_;                           \
> +       kasan_report_error(&info)
> +
> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_recover_load##size(unsigned long addr) \
> +{                                                         \
> +       CALL_KASAN_REPORT(addr, size, false);             \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_recover_load##size)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_recover_store##size(unsigned long addr) \
> +{                                                          \
> +       CALL_KASAN_REPORT(addr, size, true);               \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_recover_store##size)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_recover_load_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_load_n);
> +
> +void __asan_report_recover_store_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_store_n);
> --
> 2.1.1
>



Yikes!
So this works during bootstrap, for user memory accesses, vmalloc
memory, etc., right?
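
For reference, the check that inline instrumentation emits before, say, an 8-byte load boils down to something like the open-coded sketch below (the helper name is made up, KASAN_SHADOW_OFFSET stands for CONFIG_KASAN_SHADOW_OFFSET, and the code actually generated by GCC may differ):

	static __always_inline void inline_check_load8(unsigned long addr)
	{
		s8 *shadow = (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
					+ KASAN_SHADOW_OFFSET);

		/* for an 8-byte access, any non-zero shadow byte is a bad access */
		if (unlikely(*shadow))
			__asan_report_recover_load8(addr);
	}
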

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
@ 2014-09-26 17:18       ` Dmitry Vyukov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:18 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Michal Marek

On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This patch only demonstration how easy this could be achieved.
> GCC doesn't support this feature yet. Two patches required for this:
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html
>
> In inline instrumentation mode compiler directly inserts code
> checking shadow memory instead of __asan_load/__asan_store
> calls.
> This is usually faster than outline. In some workloads inline is
> 2 times faster than outline instrumentation.
>
> The downside of inline instrumentation is bloated kernel's .text size:
>
> size noasan/vmlinux
>    text     data     bss      dec     hex    filename
> 11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux
>
> size outline/vmlinux
>    text    data     bss      dec      hex    filename
> 16553474  1602592  950272  19106338  1238a22 outline/vmlinux
>
> size inline/vmlinux
>    text    data     bss      dec      hex    filename
> 32064759  1598688  946176  34609623  21019d7 inline/vmlinux
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Makefile          |  5 +++++
>  lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
>  mm/kasan/report.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 74 insertions(+)
>
> diff --git a/Makefile b/Makefile
> index 6cefe5e..fe7c534 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -773,6 +773,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
>  ifdef CONFIG_KASAN
> +ifdef CONFIG_KASAN_INLINE
> +CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
> +                $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
> +endif
> +
>    ifeq ($(CFLAGS_KASAN),)
>      $(warning Cannot use CONFIG_KASAN: \
>               -fsanitize=kernel-address not supported by compiler)
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index faddb0e..c4ac040 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -27,4 +27,28 @@ config TEST_KASAN
>           out of bounds accesses, use after free. It is usefull for testing
>           kernel debugging features like kernel address sanitizer.
>
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_INLINE if X86_64
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access compiler insert function call
> +         __asan_load*/__asan_store*. These functions performs check
> +         of shadow memory. This is slower than inline instrumentation,
> +         however it doesn't bloat size of kernel's .text section so
> +         much as inline does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         Compiler directly inserts code checking shadow memory before
> +         memory accesses. This is faster than outline (in some workloads
> +         it gives about x2 boost over outline instrumentation), but
> +         make kernel's .text size much bigger.
> +
> +endchoice
> +
>  endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index c42f6ba..a9262f8 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -212,3 +212,48 @@ void kasan_report_user_access(struct access_info *info)
>                 "=================================\n");
>         spin_unlock_irqrestore(&report_lock, flags);
>  }
> +
> +#define CALL_KASAN_REPORT(__addr, __size, __is_write) \
> +       struct access_info info;                      \
> +       info.access_addr = __addr;                    \
> +       info.access_size = __size;                    \
> +       info.is_write = __is_write;                   \
> +       info.ip = _RET_IP_;                           \
> +       kasan_report_error(&info)
> +
> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_recover_load##size(unsigned long addr) \
> +{                                                         \
> +       CALL_KASAN_REPORT(addr, size, false);             \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_recover_load##size)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_recover_store##size(unsigned long addr) \
> +{                                                          \
> +       CALL_KASAN_REPORT(addr, size, true);               \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_recover_store##size)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_recover_load_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_load_n);
> +
> +void __asan_report_recover_store_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_store_n);
> --
> 2.1.1
>



Yikes!
So this works during bootstrap, for user memory accesses, valloc
memory, etc, right?


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 17:07       ` Dmitry Vyukov
@ 2014-09-26 17:22         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-26 17:22 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Sasha Levin, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

2014-09-26 21:07 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
> On Fri, Sep 26, 2014 at 10:01 AM, Sasha Levin <sasha.levin@oracle.com> wrote:
>> On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
>>> Hi.
>>>
>>> This is a third iteration of kerenel address sanitizer (KASan).
>>>
>>> KASan is a runtime memory debugger designed to find use-after-free
>>> and out-of-bounds bugs.
>>>
>>> Currently KASAN supported only for x86_64 architecture and requires kernel
>>> to be build with SLUB allocator.
>>> KASAN uses compile-time instrumentation for checking every memory access, therefore you
>>> will need a fresh GCC >= v5.0.0.
>>
>> Hi Andrey,
>>
>> I tried this patchset, with the latest gcc, and I'm seeing the following:
>>
>> arch/x86/kernel/head.o: In function `_GLOBAL__sub_I_00099_0_reserve_ebda_region':
>> /home/sasha/linux-next/arch/x86/kernel/head.c:71: undefined reference to `__asan_init_v4'
>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0___ksymtab_system_state':
>> /home/sasha/linux-next/init/main.c:1034: undefined reference to `__asan_init_v4'
>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_init_uts_ns':
>> /home/sasha/linux-next/init/version.c:50: undefined reference to `__asan_init_v4'
>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_root_mountflags':
>> /home/sasha/linux-next/init/do_mounts.c:638: undefined reference to `__asan_init_v4'
>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_rd_prompt':
>> /home/sasha/linux-next/init/do_mounts_rd.c:361: undefined reference to `__asan_init_v4'
>> init/built-in.o:/home/sasha/linux-next/init/do_mounts_md.c:312: more undefined references to `__asan_init_v4' follow
>>
>>
>> What am I missing?
>
>
> Emission of __asan_init_vx needs to be disabled when
> -fsanitize=kernel-address. Our kernel does not boot with them at all.
> It probably hits some limit for something that can be increased. But I
> don't want to investigate what that limit is, as __asan_init is not
> needed for kasan at all.
>

__asan_init_vx may not be needed for the kernel, but we still need some
way to identify the compiler's asan version (e.g. for globals).
We could add some define to GCC, or just do something like this in the kernel:
#if __GNUC__ == 5
#define ASAN_V4
....

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 17:22         ` Andrey Ryabinin
@ 2014-09-26 17:29           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-26 17:29 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Sasha Levin, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

On Fri, Sep 26, 2014 at 10:22 AM, Andrey Ryabinin
<ryabinin.a.a@gmail.com> wrote:
> 2014-09-26 21:07 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>> On Fri, Sep 26, 2014 at 10:01 AM, Sasha Levin <sasha.levin@oracle.com> wrote:
>>> On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
>>>> Hi.
>>>>
>>>> This is a third iteration of kerenel address sanitizer (KASan).
>>>>
>>>> KASan is a runtime memory debugger designed to find use-after-free
>>>> and out-of-bounds bugs.
>>>>
>>>> Currently KASAN supported only for x86_64 architecture and requires kernel
>>>> to be build with SLUB allocator.
>>>> KASAN uses compile-time instrumentation for checking every memory access, therefore you
>>>> will need a fresh GCC >= v5.0.0.
>>>
>>> Hi Andrey,
>>>
>>> I tried this patchset, with the latest gcc, and I'm seeing the following:
>>>
>>> arch/x86/kernel/head.o: In function `_GLOBAL__sub_I_00099_0_reserve_ebda_region':
>>> /home/sasha/linux-next/arch/x86/kernel/head.c:71: undefined reference to `__asan_init_v4'
>>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0___ksymtab_system_state':
>>> /home/sasha/linux-next/init/main.c:1034: undefined reference to `__asan_init_v4'
>>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_init_uts_ns':
>>> /home/sasha/linux-next/init/version.c:50: undefined reference to `__asan_init_v4'
>>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_root_mountflags':
>>> /home/sasha/linux-next/init/do_mounts.c:638: undefined reference to `__asan_init_v4'
>>> init/built-in.o: In function `_GLOBAL__sub_I_00099_0_rd_prompt':
>>> /home/sasha/linux-next/init/do_mounts_rd.c:361: undefined reference to `__asan_init_v4'
>>> init/built-in.o:/home/sasha/linux-next/init/do_mounts_md.c:312: more undefined references to `__asan_init_v4' follow
>>>
>>>
>>> What am I missing?
>>
>>
>> Emission of __asan_init_vx needs to be disabled when
>> -fsanitize=kernel-address. Our kernel does not boot with them at all.
>> It probably hits some limit for something that can be increased. But I
>> don't want to investigate what that limit is, as __asan_init is not
>> needed for kasan at all.
>>
>
> __asan_init_vx may not be needed for the kernel, but we still need some
> way to identify the compiler's asan version (e.g. for globals).
> We could add some define to GCC, or just do something like this in the kernel:
> #if __GNUC__ == 5
> #define ASAN_V4
> ....


This looks good to me.
The versioning won't work the same way it works for clang/compiler-rt
and gcc/libgcc, because clang/compiler-rt are both part of the same
repo and always versioned simultaneously, while the kernel and gcc are
versioned independently; once you bump the API version you break all
users who are still on an old gcc.

So in kernel we will need to support all API versions, and the
following looks like a much simpler way to identify current API
version:
> #if __GNUC__ == 5
> #define ASAN_V4

Note that in user-space asan the other important purpose of
__asan_init is to trigger asan runtime initialization as early as
possible. This is not needed for kernel.
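
For illustration only, keying the ABI version off the GCC version could
look something like the sketch below in a kernel header; the
KASAN_ABI_VERSION name is hypothetical, not from any posted patch:

#if __GNUC__ >= 5
/* gcc 5 emits the v4 interface (the __asan_init_v4 references above) */
#define KASAN_ABI_VERSION 4
#else
#define KASAN_ABI_VERSION 3
#endif

Each future ABI bump would then only add one more branch here, and the
rest of the kasan code tests a single macro.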

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-26 17:18       ` Dmitry Vyukov
@ 2014-09-26 17:33         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-26 17:33 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, Michal Marek

2014-09-26 21:18 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>
> Yikes!
> So this works during bootstrap, for user memory accesses, valloc
> memory, etc, right?
>

Yes, this works. Userspace memory access in instrumented code will
produce general protection fault,
so it won't be unnoticed.
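
Roughly, for an aligned 8-byte access the inline check boils down to the
C sketch below. This is illustrative only: KASAN_SHADOW_OFFSET stands in
for the value passed via -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET),
and the report function name is the one from the patch.

/* Illustrative sketch of the compiler-inlined check, not actual output. */
static __always_inline void inline_check_load8(unsigned long addr)
{
	s8 *shadow = (s8 *)((addr >> 3) + KASAN_SHADOW_OFFSET);

	/*
	 * For a userspace pointer the computed shadow address lies outside
	 * the mapped shadow region, so this load itself faults; that is the
	 * general protection fault mentioned above.
	 */
	if (*shadow)	/* non-zero shadow byte: some of the 8 bytes are bad */
		__asan_report_recover_load8(addr);
}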


-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-26 17:10       ` Dmitry Vyukov
@ 2014-09-26 17:36         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-26 17:36 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, Catalin Marinas

2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
> Looks good to me.
>
> We can disable kasan instrumentation of this file as well.
>

Yes, but why? I don't think we need that.

> On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>> kmalloc internally round up allocation size, and kmemleak
>> uses rounded up size as object's size. This makes kasan
>> to complain while kmemleak scans memory or calculates of object's
>> checksum. The simplest solution here is to disable kasan.
>>
>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> ---
>>  mm/kmemleak.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index 3cda50c..9bda1b3 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -98,6 +98,7 @@
>>  #include <asm/processor.h>
>>  #include <linux/atomic.h>
>>
>> +#include <linux/kasan.h>
>>  #include <linux/kmemcheck.h>
>>  #include <linux/kmemleak.h>
>>  #include <linux/memory_hotplug.h>
>> @@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
>>         if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
>>                 return false;
>>
>> +       kasan_disable_local();
>>         object->checksum = crc32(0, (void *)object->pointer, object->size);
>> +       kasan_enable_local();
>> +
>>         return object->checksum != old_csum;
>>  }
>>
>> @@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
>>                                                   BYTES_PER_POINTER))
>>                         continue;
>>
>> +               kasan_disable_local();
>>                 pointer = *ptr;
>> +               kasan_enable_local();
>>
>>                 object = find_and_get_object(pointer, 1);
>>                 if (!object)
>> --
>> 2.1.1
>>
>


-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 17:29           ` Dmitry Vyukov
@ 2014-09-26 18:48             ` Yuri Gribov
  -1 siblings, 0 replies; 862+ messages in thread
From: Yuri Gribov @ 2014-09-26 18:48 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrey Ryabinin, Sasha Levin, Andrey Ryabinin, LKML,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

On Fri, Sep 26, 2014 at 9:29 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> So in kernel we will need to support all API versions, and the
> following looks like a much simpler way to identify current API
> version:
>> #if __GNUC__ == 5
>> #define ASAN_V4

What about having compiler(s) predefine some __SANITIZE_ADDRESS_ABI__
macro for this? Hacking on __GNUC__ may not work given the zoo of GCC
versions out there (FSF, Linaro, vendor toolchains, etc.)?
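
Purely illustrative, assuming such a predefine existed (it is only a
proposal here, not an existing GCC macro), the kernel side check would
collapse to something like:

#ifdef __SANITIZE_ADDRESS_ABI__
#define KASAN_ABI_VERSION __SANITIZE_ADDRESS_ABI__	/* hypothetical predefine */
#else
#define KASAN_ABI_VERSION 3	/* compiler without the predefine */
#endif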

-Y

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-26 17:36         ` Andrey Ryabinin
@ 2014-09-29 14:10           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-29 14:10 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, Catalin Marinas

On Fri, Sep 26, 2014 at 9:36 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> 2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>> Looks good to me.
>>
>> We can disable kasan instrumentation of this file as well.
>>
>
> Yes, but why? I don't think we need that.


Just gut feeling. Such tools usually don't play well together. For
example, due to asan quarantine lots of leaks will be missed (if we
pretend that tools work together, end users will use them together and
miss bugs). I won't be surprised if leak detector touches freed
objects under some circumstances as well.
We can do this if/when we discover actual compatibility issues, of course.


>> On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>>> kmalloc internally round up allocation size, and kmemleak
>>> uses rounded up size as object's size. This makes kasan
>>> to complain while kmemleak scans memory or calculates of object's
>>> checksum. The simplest solution here is to disable kasan.
>>>
>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>> ---
>>>  mm/kmemleak.c | 6 ++++++
>>>  1 file changed, 6 insertions(+)
>>>
>>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>>> index 3cda50c..9bda1b3 100644
>>> --- a/mm/kmemleak.c
>>> +++ b/mm/kmemleak.c
>>> @@ -98,6 +98,7 @@
>>>  #include <asm/processor.h>
>>>  #include <linux/atomic.h>
>>>
>>> +#include <linux/kasan.h>
>>>  #include <linux/kmemcheck.h>
>>>  #include <linux/kmemleak.h>
>>>  #include <linux/memory_hotplug.h>
>>> @@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
>>>         if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
>>>                 return false;
>>>
>>> +       kasan_disable_local();
>>>         object->checksum = crc32(0, (void *)object->pointer, object->size);
>>> +       kasan_enable_local();
>>> +
>>>         return object->checksum != old_csum;
>>>  }
>>>
>>> @@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
>>>                                                   BYTES_PER_POINTER))
>>>                         continue;
>>>
>>> +               kasan_disable_local();
>>>                 pointer = *ptr;
>>> +               kasan_enable_local();
>>>
>>>                 object = find_and_get_object(pointer, 1);
>>>                 if (!object)
>>> --
>>> 2.1.1
>>>
>>
>
>
> --
> Best regards,
> Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-26 18:48             ` Yuri Gribov
@ 2014-09-29 14:22               ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-29 14:22 UTC (permalink / raw)
  To: Yuri Gribov
  Cc: Andrey Ryabinin, Sasha Levin, Andrey Ryabinin, LKML,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

On Fri, Sep 26, 2014 at 10:48 PM, Yuri Gribov <tetra2005@gmail.com> wrote:
> On Fri, Sep 26, 2014 at 9:29 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>> So in kernel we will need to support all API versions, and the
>> following looks like a much simpler way to identify current API
>> version:
>>> #if __GNUC__ == 5
>>> #define ASAN_V4
>
> What about having compiler(s) predefine some __SANITIZE_ADDRESS_ABI__
> macro for this? Hacking on __GNUC__ may not work given the zoo of GCC
> versions out there (FSF, Linaro, vendor toolchains, etc.)?

I don't have strong preference here... well, actually just not competent :)

But on second thought... what do we want to do with pre-built
modules? Can you envision that somebody distributes binary modules
built with asan?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-24 12:44     ` Andrey Ryabinin
@ 2014-09-29 14:27       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-29 14:27 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Michal Marek

On Wed, Sep 24, 2014 at 4:44 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This patch only demonstration how easy this could be achieved.
> GCC doesn't support this feature yet. Two patches required for this:
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html
>
> In inline instrumentation mode compiler directly inserts code
> checking shadow memory instead of __asan_load/__asan_store
> calls.
> This is usually faster than outline. In some workloads inline is
> 2 times faster than outline instrumentation.
>
> The downside of inline instrumentation is bloated kernel's .text size:
>
> size noasan/vmlinux
>    text     data     bss      dec     hex    filename
> 11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux
>
> size outline/vmlinux
>    text    data     bss      dec      hex    filename
> 16553474  1602592  950272  19106338  1238a22 outline/vmlinux
>
> size inline/vmlinux
>    text    data     bss      dec      hex    filename
> 32064759  1598688  946176  34609623  21019d7 inline/vmlinux
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Makefile          |  5 +++++
>  lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
>  mm/kasan/report.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 74 insertions(+)
>
> diff --git a/Makefile b/Makefile
> index 6cefe5e..fe7c534 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -773,6 +773,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
>  ifdef CONFIG_KASAN
> +ifdef CONFIG_KASAN_INLINE
> +CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
> +                $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
> +endif
> +
>    ifeq ($(CFLAGS_KASAN),)
>      $(warning Cannot use CONFIG_KASAN: \
>               -fsanitize=kernel-address not supported by compiler)
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index faddb0e..c4ac040 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -27,4 +27,28 @@ config TEST_KASAN
>           out of bounds accesses, use after free. It is usefull for testing
>           kernel debugging features like kernel address sanitizer.
>
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_INLINE if X86_64
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access compiler insert function call
> +         __asan_load*/__asan_store*. These functions performs check
> +         of shadow memory. This is slower than inline instrumentation,
> +         however it doesn't bloat size of kernel's .text section so
> +         much as inline does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         Compiler directly inserts code checking shadow memory before
> +         memory accesses. This is faster than outline (in some workloads
> +         it gives about x2 boost over outline instrumentation), but
> +         make kernel's .text size much bigger.
> +
> +endchoice
> +
>  endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index c42f6ba..a9262f8 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -212,3 +212,48 @@ void kasan_report_user_access(struct access_info *info)
>                 "=================================\n");
>         spin_unlock_irqrestore(&report_lock, flags);
>  }
> +
> +#define CALL_KASAN_REPORT(__addr, __size, __is_write) \
> +       struct access_info info;                      \
> +       info.access_addr = __addr;                    \
> +       info.access_size = __size;                    \
> +       info.is_write = __is_write;                   \
> +       info.ip = _RET_IP_;                           \
> +       kasan_report_error(&info)


Make it a function. And also call it from check_memory_region.
It uses _RET_IP_, but check_memory_region uses _RET_IP_ as well and
relies on __always_inline.


Otherwise looks good to me.
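
A minimal sketch of that refactoring, with a hypothetical kasan_report()
helper; the access_info fields and the __asan_report_recover_* stubs are
the ones from the patch:

/* Callers pass their own _RET_IP_, so the helper itself need not be inlined. */
static void kasan_report(unsigned long addr, size_t size,
			 bool is_write, unsigned long ip)
{
	struct access_info info = {
		.access_addr = addr,
		.access_size = size,
		.is_write = is_write,
		.ip = ip,
	};

	kasan_report_error(&info);
}

#define DEFINE_ASAN_REPORT_LOAD(size)				\
void __asan_report_recover_load##size(unsigned long addr)	\
{								\
	kasan_report(addr, size, false, _RET_IP_);		\
}								\
EXPORT_SYMBOL(__asan_report_recover_load##size)

check_memory_region() could then call the same helper with its own
_RET_IP_ instead of relying on __always_inline.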


> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_recover_load##size(unsigned long addr) \
> +{                                                         \
> +       CALL_KASAN_REPORT(addr, size, false);             \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_recover_load##size)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_recover_store##size(unsigned long addr) \
> +{                                                          \
> +       CALL_KASAN_REPORT(addr, size, true);               \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_recover_store##size)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_recover_load_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_load_n);
> +
> +void __asan_report_recover_store_n(unsigned long addr, size_t size)
> +{
> +       CALL_KASAN_REPORT(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_store_n);
> --
> 2.1.1
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-29 14:28           ` Dmitry Vyukov
@ 2014-09-29 14:27             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-09-29 14:27 UTC (permalink / raw)
  To: Dmitry Vyukov, Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Michal Marek

On 09/29/2014 06:28 PM, Dmitry Vyukov wrote:
> On Fri, Sep 26, 2014 at 9:33 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>> 2014-09-26 21:18 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>>>
>>> Yikes!
>>> So this works during bootstrap, for user memory accesses, valloc
>>> memory, etc, right?
>>>
>>
>> Yes, this works. Userspace memory access in instrumented code will
>> produce general protection fault,
>> so it won't be unnoticed.
> 
> 
> Great!
> What happens during early bootstrap when shadow is not mapped yet?
> 

Shadow is mapped very early. Any instrumented code executes only after the shadow is mapped.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v3 13/13] kasan: introduce inline instrumentation
  2014-09-26 17:33         ` Andrey Ryabinin
@ 2014-09-29 14:28           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-29 14:28 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, Michal Marek

On Fri, Sep 26, 2014 at 9:33 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> 2014-09-26 21:18 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>>
>> Yikes!
>> So this works during bootstrap, for user memory accesses, valloc
>> memory, etc, right?
>>
>
> Yes, this works. Userspace memory access in instrumented code will
> produce general protection fault,
> so it won't be unnoticed.


Great!
What happens during early bootstrap when shadow is not mapped yet?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-29 14:22               ` Dmitry Vyukov
@ 2014-09-29 14:36                 ` Peter Zijlstra
  -1 siblings, 0 replies; 862+ messages in thread
From: Peter Zijlstra @ 2014-09-29 14:36 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Yuri Gribov, Andrey Ryabinin, Sasha Levin, Andrey Ryabinin, LKML,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Alexander Viro, Dave Jones

On Mon, Sep 29, 2014 at 06:22:46PM +0400, Dmitry Vyukov wrote:
> But on second thought... what do we want to do with pre-built
> modules? Can you envision that somebody distributes binary modules
> built with asan?

Nobody should ever care about binary modules other than inflicting the
maximum pain and breakage on whoever does so.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-29 14:36                 ` Peter Zijlstra
@ 2014-09-29 14:48                   ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-09-29 14:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Yuri Gribov, Andrey Ryabinin, Sasha Levin, Andrey Ryabinin, LKML,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, linux-kbuild, x86, linux-mm,
	Randy Dunlap, Alexander Viro, Dave Jones

OK, great, then we can do __SANITIZE_ADDRESS_ABI__

On Mon, Sep 29, 2014 at 6:36 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Sep 29, 2014 at 06:22:46PM +0400, Dmitry Vyukov wrote:
>> But on second thought... what do we want to do with pre-built
>> modules? Can you envision that somebody distributes binary modules
>> built with asan?
>
> Nobody should ever care about binary modules other than inflicting the
> maximum pain and breakage on whoever does so.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-09-29 14:10           ` Dmitry Vyukov
@ 2014-10-01 10:39             ` Catalin Marinas
  -1 siblings, 0 replies; 862+ messages in thread
From: Catalin Marinas @ 2014-10-01 10:39 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrey Ryabinin, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

On Mon, Sep 29, 2014 at 03:10:01PM +0100, Dmitry Vyukov wrote:
> On Fri, Sep 26, 2014 at 9:36 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> > 2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
> >> Looks good to me.
> >>
> >> We can disable kasan instrumentation of this file as well.
> >
> > Yes, but why? I don't think we need that.
> 
> Just gut feeling. Such tools usually don't play well together. For
> example, due to asan quarantine lots of leaks will be missed (if we
> pretend that tools work together, end users will use them together and
> miss bugs). I won't be surprised if leak detector touches freed
> objects under some circumstances as well.
> We can do this if/when discover actual compatibility issues, of course.

I think it's worth testing them together first.

One issue, as mentioned in the patch log, is that the size information
that kmemleak gets is the one from the kmem_cache object rather than the
original allocation size, so this would be rounded up.

Kmemleak should not touch freed objects (if an object is freed during a
scan, it is protected by some lock until the scan completes). There is a
bug, however, which I haven't got around to fixing yet: if kmemleak fails for
some reason (cannot allocate memory) and disables itself, it may access
some freed object (though usually hard to trigger).
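
As a rough illustration of that rounding (the exact cache depends on the slab
configuration, and the function below is just an example, not code from the
series):

	#include <linux/printk.h>
	#include <linux/slab.h>

	static void kmemleak_size_example(void)
	{
		/* A 100-byte request is typically served from kmalloc-128. */
		void *p = kmalloc(100, GFP_KERNEL);

		if (p) {
			/* kmemleak tracks the cache's object size, so the padding
			 * after the requested 100 bytes gets scanned as well. */
			pr_info("requested 100, object size %zu\n", ksize(p));
			kfree(p);
		}
	}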

-- 
Catalin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-01 10:39             ` Catalin Marinas
@ 2014-10-01 11:45               ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-01 11:45 UTC (permalink / raw)
  To: Catalin Marinas, Dmitry Vyukov
  Cc: Andrey Ryabinin, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm

On 10/01/2014 02:39 PM, Catalin Marinas wrote:
> On Mon, Sep 29, 2014 at 03:10:01PM +0100, Dmitry Vyukov wrote:
>> On Fri, Sep 26, 2014 at 9:36 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>> 2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>>>> Looks good to me.
>>>>
>>>> We can disable kasan instrumentation of this file as well.
>>>
>>> Yes, but why? I don't think we need that.
>>
>> Just gut feeling. Such tools usually don't play well together. For
>> example, due to asan quarantine lots of leaks will be missed (if we
>> pretend that tools work together, end users will use them together and
>> miss bugs). I won't be surprised if leak detector touches freed
>> objects under some circumstances as well.
>> We can do this if/when discover actual compatibility issues, of course.
> 
> I think it's worth testing them together first.
> 

I did test them together. With this patch applied both tools work without problems.


> One issue, as mentioned in the patch log, is that the size information
> that kmemleak gets is the one from the kmem_cache object rather than the
> original allocation size, so this would be rounded up.
> 
> Kmemleak should not touch freed objects (if an object is freed during a
> scan, it is protected by some lock until the scan completes). There is a
> bug, however, which I haven't got around to fixing yet: if kmemleak fails for
> some reason (cannot allocate memory) and disables itself, it may access
> some freed object (though usually hard to trigger).
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-01 11:45               ` Andrey Ryabinin
@ 2014-10-01 13:27                 ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-10-01 13:27 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Catalin Marinas, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

On Wed, Oct 1, 2014 at 3:45 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> On 10/01/2014 02:39 PM, Catalin Marinas wrote:
>> On Mon, Sep 29, 2014 at 03:10:01PM +0100, Dmitry Vyukov wrote:
>>> On Fri, Sep 26, 2014 at 9:36 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>>> 2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>>>>> Looks good to me.
>>>>>
>>>>> We can disable kasan instrumentation of this file as well.
>>>>
>>>> Yes, but why? I don't think we need that.
>>>
>>> Just gut feeling. Such tools usually don't play well together. For
>>> example, due to asan quarantine lots of leaks will be missed (if we
>>> pretend that tools work together, end users will use them together and
>>> miss bugs). I won't be surprised if leak detector touches freed
>>> objects under some circumstances as well.
>>> We can do this if/when discover actual compatibility issues, of course.
>>
>> I think it's worth testing them together first.
>>
>
>>> I did test them together. With this patch applied both tools work without problems.

What do you mean "works without problems"? Are you sure that kmemleak
still detects all leaks it is intended to detect?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-01 13:27                 ` Dmitry Vyukov
@ 2014-10-01 14:11                   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-01 14:11 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Catalin Marinas, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

On 10/01/2014 05:27 PM, Dmitry Vyukov wrote:
> On Wed, Oct 1, 2014 at 3:45 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>> On 10/01/2014 02:39 PM, Catalin Marinas wrote:
>>> On Mon, Sep 29, 2014 at 03:10:01PM +0100, Dmitry Vyukov wrote:
>>>> On Fri, Sep 26, 2014 at 9:36 PM, Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>>>> 2014-09-26 21:10 GMT+04:00 Dmitry Vyukov <dvyukov@google.com>:
>>>>>> Looks good to me.
>>>>>>
>>>>>> We can disable kasan instrumentation of this file as well.
>>>>>
>>>>> Yes, but why? I don't think we need that.
>>>>
>>>> Just gut feeling. Such tools usually don't play well together. For
>>>> example, due to asan quarantine lots of leaks will be missed (if we
>>>> pretend that tools work together, end users will use them together and
>>>> miss bugs). I won't be surprised if leak detector touches freed
>>>> objects under some circumstances as well.
>>>> We can do this if/when discover actual compatibility issues, of course.
>>>
>>> I think it's worth testing them together first.
>>>
>>
>> I did test them together. With this patch applied both tools work without problems.
> 
> What do you mean "works without problems"? Are you sure that kmemleak
> still detects all leaks it is intended to detect?
> 

Yes, I'm sure about that. And how could kasan affect kmemleak's capability to detect leaks?


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-01 14:11                   ` Andrey Ryabinin
@ 2014-10-01 14:24                     ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-10-01 14:24 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Catalin Marinas, Andrey Ryabinin, LKML, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

On Wed, Oct 1, 2014 at 6:11 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>>>>>>>
>>>>>>> We can disable kasan instrumentation of this file as well.
>>>>>>
>>>>>> Yes, but why? I don't think we need that.
>>>>>
>>>>> Just gut feeling. Such tools usually don't play well together. For
>>>>> example, due to asan quarantine lots of leaks will be missed (if we
>>>>> pretend that tools work together, end users will use them together and
>>>>> miss bugs). I won't be surprised if leak detector touches freed
>>>>> objects under some circumstances as well.
>>>>> We can do this if/when discover actual compatibility issues, of course.
>>>>
>>>> I think it's worth testing them together first.
>>>>
>>>
>>> I did test them together. With this patch applied both tools work without problems.
>>
>> What do you mean "works without problems"? Are you sure that kmemleak
>> still detects all leaks it is intended to detect?
>>
>
> Yes, I'm sure about that. And how could kasan affect kmemleak's capability to detect leaks?


Ah, OK, we don't have a quarantine.
The idea is that redzones and the quarantine will contain parasitic
pointers (the quarantine is exactly a linked list of freed objects).
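
To make the concern concrete, here is a tiny conceptual sketch (not code from
kasan or kmemleak) of an intrusive quarantine list: the link lives inside the
freed object, so the quarantine keeps pointer values into freed memory that a
conservative scanner such as kmemleak will treat as live references:

	struct qnode {
		struct qnode *next;		/* stored in the freed object itself */
	};

	struct quarantine {
		struct qnode *head;
	};

	static void quarantine_put(struct quarantine *q, void *freed_object)
	{
		struct qnode *node = freed_object;	/* reuse the object's memory */

		node->next = q->head;	/* one freed object now points at another */
		q->head = node;		/* and the list head points into freed memory */
	}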

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-09-11  5:31         ` Andrey Ryabinin
@ 2014-10-01 15:31           ` H. Peter Anvin
  -1 siblings, 0 replies; 862+ messages in thread
From: H. Peter Anvin @ 2014-10-01 15:31 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

On 09/10/2014 10:31 PM, Andrey Ryabinin wrote:
> On 09/11/2014 08:01 AM, H. Peter Anvin wrote:
>> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>>> This patch add arch specific code for kernel address sanitizer.
>>>
>>> 16TB of virtual addressed used for shadow memory.
>>> It's located in range [0xffff800000000000 - 0xffff900000000000]
>>> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
>>> to 0xffff900000000000.
>>
>> NAK on this.
>>
>> 0xffff880000000000 is the lowest usable address because we have agreed
>> to leave 0xffff800000000000-0xffff880000000000 for the hypervisor or
>> other non-OS uses.
>>
>> Bumping PAGE_OFFSET seems needlessly messy, why not just designate a
>> zone higher up in memory?
>>
> 
> I already answered to Dave why I choose to place shadow bellow PAGE_OFFSET (answer copied bellow).
> In short - yes, shadow could be higher. But for some sort of kernel bugs we could have confusing oopses in kasan kernel.
> 

Confusing how?  I presume you are talking about something trying to
touch a non-canonical address, which is usually a very blatant type of bug.

	-hpa



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
  2014-10-01 15:31           ` H. Peter Anvin
@ 2014-10-01 16:28             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-01 16:28 UTC (permalink / raw)
  To: H. Peter Anvin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

On 10/01/2014 07:31 PM, H. Peter Anvin wrote:
> On 09/10/2014 10:31 PM, Andrey Ryabinin wrote:
>> On 09/11/2014 08:01 AM, H. Peter Anvin wrote:
>>> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>>>> This patch add arch specific code for kernel address sanitizer.
>>>>
>>>> 16TB of virtual addressed used for shadow memory.
>>>> It's located in range [0xffff800000000000 - 0xffff900000000000]
>>>> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
>>>> to 0xffff900000000000.
>>>
>>> NAK on this.
>>>
>>> 0xffff880000000000 is the lowest usable address because we have agreed
>>> to leave 0xffff800000000000-0xffff880000000000 for the hypervisor or
>>> other non-OS uses.
>>>
>>> Bumping PAGE_OFFSET seems needlessly messy, why not just designate a
>>> zone higher up in memory?
>>>
>>
>> I already answered to Dave why I choose to place shadow bellow PAGE_OFFSET (answer copied bellow).
>> In short - yes, shadow could be higher. But for some sort of kernel bugs we could have confusing oopses in kasan kernel.
>>
> 
> Confusing how?  I presume you are talking about something trying to
> touch a non-canonical address, which is usually a very blatant type of bug.
> 
> 	-hpa
> 

For those kinds of bugs we normally get a general protection fault.

With inline-instrumented kasan we could get either a general protection fault,
or an unhandled page fault on the "kasan_mem_to_shadow(non_canonical_address)" address.
I assume that the latter case could be a bit confusing.
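
A minimal sketch of that case, with a made-up pointer value (only
KASAN_SHADOW_SCALE_SHIFT and KASAN_SHADOW_OFFSET come from this series; the
rest is purely illustrative, not generated code):

	static void confusing_fault_sketch(void)
	{
		unsigned long bad = 0x1234000000000000UL;	/* non-canonical */

		/* Inline instrumentation reads the shadow of 'bad' before 'bad': */
		unsigned long shadow = (bad >> KASAN_SHADOW_SCALE_SHIFT)
				       + KASAN_SHADOW_OFFSET;

		/*
		 * The oops then reports an unhandled page fault at 'shadow',
		 * i.e. kasan_mem_to_shadow(bad), while an uninstrumented kernel
		 * would simply take a general protection fault on 'bad' itself.
		 */
		(void)shadow;
	}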


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v4 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-10-06 15:53   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC >= v5.0.0.

Patches are based on the motm-2014-10-02-16-22 tree and are also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v4

Changes since v3:

    - Rebased on top of the motm-2014-10-02-16-22.
    - Added comment explaining why rcu slabs are not poisoned in kasan_slab_free().
    - Removed the 'Do not use slub poisoning with KASan because poisoning
       overwrites user-tracking info' paragraph from Documentation/kasan.txt
       because this is absolutely wrong. Poisoning overwrites only the object's data
       and doesn't touch metadata, so it works fine with KASan.

    - Removed useless kasan_free_slab_pages().
    - Fixed kasan_mark_slab_padding(). In v3 kasan_mark_slab_padding could
        leave some memory unpoisoned.

    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html

    - Replaced CALL_KASAN_REPORT define with inline function
        (patch "kasan: introduce inline instrumentation")

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added poison page. This page is mapped to the shadow corresponding to the
      shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places


Comparison with other debugging features:
=========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is function to translate address to corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access or not by checking
    the corresponding shadow memory. If the access is not valid, an error is printed.
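
    For illustration only, here is a minimal sketch of what such a check boils
    down to for a 1-byte access. It assumes only the shadow encoding described
    above; check_1byte_access() and report_bad_access() are made-up names, not
    the actual __asan_load1() implementation from this series:

         /* Illustrative sketch, not the real instrumentation callback. */
         static void check_1byte_access(unsigned long addr)
         {
                    s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                    if (shadow == 0)
                            return;        /* the whole 8-byte granule is accessible */

                    /*
                     * 1..7: only the first 'shadow' bytes of the granule are valid,
                     * so the access is bad if our offset inside the granule is too
                     * large.  Any negative value: the whole granule is poisoned.
                     */
                    if (shadow < 0 || (s8)(addr & 7) >= shadow)
                            report_bad_access(addr);   /* hypothetical reporting hook */
         }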

Andrey Ryabinin (13):
  Add kernel address sanitizer infrastructure.
  efi: libstub: disable KASAN for efistub
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  kasan: introduce inline instrumentation

 Documentation/kasan.txt               | 174 ++++++++++++++
 Makefile                              |  15 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 +++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 +++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   5 +
 include/linux/kasan.h                 |  69 ++++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |   9 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 +++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 430 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  54 +++++
 mm/kasan/report.c                     | 238 +++++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 ++++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1570 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

--
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>

-- 
2.1.2


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v4 01/13] Add kernel address sanitizer infrastructure.
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Jiri Kosina, Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access or not by checking
the corresponding shadow memory. If the access is not valid, an error is printed.
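
As a rough sketch of the bookkeeping side (only KASAN_SHADOW_SCALE_SHIFT and
KASAN_SHADOW_OFFSET are taken from the description above; the helper name and
the rest are made up for this example), marking a freshly allocated object of
'size' bytes as accessible under this encoding would look roughly like:

     /* Sketch only: 'addr' is assumed 8-byte aligned, as slab objects are. */
     static void unpoison_object_sketch(unsigned long addr, size_t size)
     {
                u8 *shadow = (u8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
                                    + KASAN_SHADOW_OFFSET);

                /* Full 8-byte granules become 0: completely accessible. */
                memset(shadow, 0, size >> KASAN_SHADOW_SCALE_SHIFT);

                /*
                 * A trailing partial granule records how many bytes are valid,
                 * e.g. a 5-byte object leaves its last shadow byte set to 5.
                 */
                if (size & 7)
                        shadow[size >> KASAN_SHADOW_SCALE_SHIFT] = size & 7;
     }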

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 174 +++++++++++++++++++++++++
 Makefile                |  11 +-
 include/linux/kasan.h   |  42 ++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  15 +++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 336 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  27 ++++
 mm/kasan/report.c       | 169 ++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 12 files changed, 791 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..577de3a
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,174 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for x86_64 architecture and requires kernel
+to be built with SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
diff --git a/Makefile b/Makefile
index e90dce2..6f8be78 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ba3ec4e..40d58a8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -70,3 +70,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..8ce738e
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,336 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..9a9fe9f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,27 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..89a9aa1
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,169 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 01/13] Add kernel address sanitizer infrastructure.
@ 2014-10-06 15:53     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Jiri Kosina, Michal Marek, Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore a fresh GCC (>= 5.0.0) is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use the compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
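
As an aside (an illustration, not spelled out in the patch): the inverse helper
kasan_shadow_to_mem() from mm/kasan/kasan.h undoes this mapping up to the 8-byte
granularity:

     kasan_shadow_to_mem(kasan_mem_to_shadow(addr)) == (addr & ~7UL)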

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not.
Any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.
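
As a concrete example (an illustration, not taken from the patch): a 13-byte
kmalloc() object is described by two shadow bytes, followed by redzone markers
(0xfc, KASAN_KMALLOC_REDZONE in the marker list from Documentation/kasan.txt):

     bytes  0..7 of the object: all addressable           -> shadow byte 00
     bytes  8..15: only the first 5 belong to the object  -> shadow byte 05
     redzone following the object                         -> shadow bytes fc fc ...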

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 174 +++++++++++++++++++++++++
 Makefile                |  11 +-
 include/linux/kasan.h   |  42 ++++++
 include/linux/sched.h   |   3 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  15 +++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 336 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  27 ++++
 mm/kasan/report.c       | 169 ++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 12 files changed, 791 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..577de3a
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,174 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 5.0.0.
+
+Currently KASAN is supported only for x86_64 architecture and requires kernel
+to be built with SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
diff --git a/Makefile b/Makefile
index e90dce2..6f8be78 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ba3ec4e..40d58a8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -70,3 +70,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..8ce738e
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,336 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..9a9fe9f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,27 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..89a9aa1
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,169 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 02/13] efi: libstub: disable KASAN for efistub
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

KASan, like many other options, should be disabled for this stub
to prevent build failures.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 drivers/firmware/efi/libstub/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 03/13] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for kasan.
The compiler inserts an __asan_load call between load_segment and wrmsrl. If the
kernel is built with stackprotector, this results in a boot failure, because
__asan_load itself uses the stack protector.

To avoid this, irq_stack_union.gs_base is stored in a temporary variable before
load_segment, so __asan_load is called before load_segment().
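
Roughly, the problematic ordering looks like this (illustrative only, not
actual compiler output; the __asan_load8() call stands for the instrumentation
GCC inserts for the 8-byte read):

	loadsegment(gs, 0);
	/* instrumented read of the per-cpu variable, with stackprotector */
	__asan_load8((unsigned long)&per_cpu(irq_stack_union.gs_base, cpu));
	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));

With the temporary variable, the instrumented read (and therefore the
__asan_load8() call) happens before loadsegment().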

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
    which tells the compiler not to instrument this function. However, this
    results in a build failure with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ea51f2c..8d9a3c6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 04/13] x86_64: add KASan support
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar

This patch adds arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for the shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.
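
As a sanity check of the layout above, here is a small stand-alone user-space
program (not part of the patch). It assumes the usual kasan translation
shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET, with the
offset taken from the lib/Kconfig.kasan hunk below, and shows that the kernel
half of the address space lands inside the stated 16TB window:

#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_OFFSET      0xdfffe90000000000ULL
#define KASAN_SHADOW_SCALE_SHIFT 3

static uint64_t mem_to_shadow(uint64_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	/* Kernel half of the 48-bit address space: [0xffff800000000000, 2^64) */
	uint64_t first = 0xffff800000000000ULL;
	uint64_t last  = 0xffffffffffffffffULL;

	/* Prints 0xffffd90000000000 and 0xffffe8ffffffffff, i.e. within the
	 * [KASAN_SHADOW_START, KASAN_SHADOW_END) range from asm/kasan.h. */
	printf("shadow(first) = 0x%llx\n", (unsigned long long)mem_to_shadow(first));
	printf("shadow(last)  = 0x%llx\n", (unsigned long long)mem_to_shadow(last));
	return 0;
}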

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/boot/Makefile            |  2 +
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/include/asm/kasan.h      | 27 ++++++++++++
 arch/x86/kernel/Makefile          |  2 +
 arch/x86/kernel/dumpstack.c       |  5 ++-
 arch/x86/kernel/head64.c          |  9 +++-
 arch/x86/kernel/head_64.S         | 28 +++++++++++++
 arch/x86/mm/Makefile              |  3 ++
 arch/x86/mm/init.c                |  3 ++
 arch/x86/mm/kasan_init_64.c       | 87 +++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |  2 +-
 arch/x86/realmode/rm/Makefile     |  1 +
 arch/x86/vdso/Makefile            |  1 +
 lib/Kconfig.kasan                 |  6 +++
 15 files changed, 175 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 77c0ae3..c7c04f5 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -137,6 +137,7 @@ config X86
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
 	select X86_FEATURE_NAMES if PROC_FS
+	select HAVE_ARCH_KASAN if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 704f58a..21faab6b7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..056c943
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_map_shadow(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_map_shadow(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f1e774..9d46ee8 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..4a5a597 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,7 @@
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
 #include <asm/init.h>
+#include <asm/kasan.h>
 #include <asm/page.h>
 #include <asm/page_types.h>
 #include <asm/sections.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..c6ea8a4
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,87 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 54cf44f..b458a00 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -12,4 +13,9 @@ config KASAN
 	  of available memory and brings about ~x3 performance slowdown.
 	  For better error detection enable CONFIG_STACKTRACE,
 	  and add slub_debug=U to boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+	default 0xdfffe90000000000 if X86_64
+
 endif
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 05/13] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:53 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region, and to mark freed
pages as inaccessible.
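
For illustration, this is the kind of bug these hooks make detectable (a
hypothetical snippet, not from this patch): the shadow of the freed page is
poisoned with KASAN_FREE_PAGE, so the instrumented write below is reported
as a use-after-free:

	struct page *page = alloc_pages(GFP_KERNEL, 0);
	char *p = page_address(page);

	__free_pages(page, 0);
	p[0] = 'x';	/* write to a freed page -> kasan report */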

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index edba18a..834f846 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 8ce738e..5782082 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -259,6 +259,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report_error(&info);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 9a9fe9f..ee572c4 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 89a9aa1..707323b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 53e10ff..88b5032 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -59,6 +59,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -755,6 +756,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -941,6 +943,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 06/13] mm: slub: introduce virt_to_obj function.
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

virt_to_obj takes the kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.
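
A stand-alone illustration of the arithmetic with made-up numbers (the real
helper takes the kmem_cache and uses s->size; here the object size is passed
directly):

#include <stdio.h>
#include <stdint.h>

static uintptr_t virt_to_obj(uintptr_t slab_page, size_t size, uintptr_t x)
{
	return x - ((x - slab_page) % size);
}

int main(void)
{
	uintptr_t slab_page = 0x100000;			/* slab page start   */
	size_t    size      = 192;			/* object size       */
	uintptr_t x         = slab_page + 2 * 192 + 57;	/* inside 3rd object */

	/* Prints 0x100180, i.e. slab_page + 2 * 192: the object's start. */
	printf("object start = 0x%lx\n",
		(unsigned long)virt_to_obj(slab_page, size, x));
	return 0;
}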

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 07/13] mm: slub: share slab_err and object_err functions
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

Remove static and add function declarations to include/linux/slub_def.h
so they can be used by the kernel address sanitizer.
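
For context, a sketch of the kind of caller this enables (hypothetical code,
assuming the virt_to_obj() helper from the previous patch; the real
report-side hunk comes later in the series):

	static void describe_object(struct kmem_cache *cache, struct page *page,
				    void *addr)
	{
		u8 *object = virt_to_obj(cache, page_address(page), addr);

		object_err(cache, page, object, "kasan: bad access detected");
	}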

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 4 ++++
 mm/slub.c                | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..8fed60d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,8 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index ae7b9f1..82282f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 08/13] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which may be useful for different checkers (e.g. KASan).
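
A sketch of how a checker could back these hooks (an assumption about a later
patch, not the hunk below; it relies on the kasan_enable_local()/
kasan_disable_local() helpers already declared in <linux/kasan.h>):

	static inline void metadata_access_enable(void)
	{
		kasan_disable_local();	/* metadata is redzoned: mute kasan reports */
	}

	static inline void metadata_access_disable(void)
	{
		kasan_enable_local();	/* resume normal checking */
	}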

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 82282f5..9b1f75c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 09/13] mm: slub: add kernel address sanitizer support for slub allocator
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested by
the caller is marked as accessible, and the rest of the object (including
slub's metadata) is marked as a redzone (inaccessible).

We also mark the whole object as accessible if ksize was called for it.
There are some places in the kernel where ksize is called to inquire the
size of the actually allocated area. Such callers may validly access the
whole allocated memory, so it should be marked as accessible.

Code in the slub.c and slab_common.c files may validly access an object's
metadata, so instrumentation for these files is disabled.
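
The allocation-time poisoning can be modelled in user space like this (a
minimal sketch; SHADOW_SCALE, the 0xFC redzone marker and the helper name
are illustrative assumptions, not kasan's actual definitions):

#include <stdio.h>
#include <string.h>

#define SHADOW_SCALE	8
#define REDZONE		0xFC	/* illustrative "kmalloc redzone" marker */

/* Shadow for an object of object_size bytes from which the caller asked
 * for only 'size' bytes: leading bytes accessible, the rest redzone. */
static void model_kmalloc_poison(unsigned char *shadow,
				 size_t object_size, size_t size)
{
	size_t i;

	memset(shadow, REDZONE, object_size / SHADOW_SCALE);
	for (i = 0; i < size / SHADOW_SCALE; i++)
		shadow[i] = 0;				/* fully accessible granule */
	if (size % SHADOW_SCALE)
		shadow[size / SHADOW_SCALE] = size % SHADOW_SCALE; /* partial granule */
}

int main(void)
{
	unsigned char shadow[8];	/* covers a 64-byte object */
	size_t i;

	model_kmalloc_poison(shadow, 64, 21);	/* kmalloc(21) from a 64-byte cache */
	for (i = 0; i < 8; i++)
		printf("%02x ", shadow[i]);	/* 00 00 05 fc fc fc fc fc */
	printf("\n");
	return 0;
}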

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 35 ++++++++++++++++++--
 9 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index b458a00..d16b899 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 40d58a8..bef873a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 5782082..d4552a2 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 static inline bool kasan_enabled(void)
 {
@@ -273,6 +274,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ee572c4..b70a3d1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 707323b..03ce28e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -73,11 +78,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3a6e0cf..33868b4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -795,6 +795,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -969,8 +970,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 9b1f75c..3863e20 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2488,6 +2500,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2514,6 +2527,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2897,6 +2912,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3269,6 +3285,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3312,12 +3330,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3333,6 +3353,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 10/13] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Alexander Viro

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that dname is allocated with
kmalloc() and that kmalloc() internally rounds up the allocation size.
So this is not a bug, but it makes kasan complain about such
accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.
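
A simplified sketch of the access pattern (illustrative, not the
actual dentry_string_cmp() code), for name->len == 5 on a 64-bit
little-endian machine:

        /* __d_alloc() did dname = kmalloc(name->len + 1, GFP_KERNEL),
         * i.e. asked for 6 bytes; SLUB hands out an 8-byte object. */
        unsigned long word = *(const unsigned long *)dname;
        /* The load above reads bytes 0..7.  Bytes 6..7 are beyond the
         * requested size and their values are masked off afterwards,
         * but kasan would still report the load as out-of-bounds
         * unless those bytes are unpoisoned. */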

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 876ac08..584b283 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1395,6 +1396,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 			kmem_cache_free(dentry_cache, dentry); 
 			return NULL;
 		}
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 11/13] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan around
those accesses.
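
For example (sizes are illustrative): an object allocated with
kmalloc(130, GFP_KERNEL) comes from the kmalloc-192 cache, so kmemleak
records object->size == 192 while only bytes 0..129 are accessible in
the shadow; the checksum and scan loops therefore read the redzone
bytes 130..191. The hunks below simply fence those known-benign reads:

        kasan_disable_local();  /* benign reads of the kmalloc slack */
        object->checksum = crc32(0, (void *)object->pointer, object->size);
        kasan_enable_local();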

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v4 12/13] lib: add kasan test module
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more tests here in the future (like
out-of-bounds accesses to stack/global variables and so on).
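
Note that kmalloc_tests_init() intentionally returns -EAGAIN, so
loading the module runs every test once (each test is expected to
produce one kasan report in the kernel log) and the module is never
left loaded.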

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index d16b899..94293c8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,4 +19,12 @@ config KASAN_SHADOW_OFFSET
 	hex
 	default 0xdfffe90000000000 if X86_64
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84000ec..b387570 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..66a04eb
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size , GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC PATCH v4 13/13] kasan: introduce inline instrumentation
  2014-10-06 15:53   ` Andrey Ryabinin
@ 2014-10-06 15:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Michal Marek

This patch is only a demonstration of how easily this could be achieved.
GCC doesn't support this feature yet. Two patches are required for this:
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

In inline instrumentation mode the compiler directly inserts code
checking the shadow memory instead of __asan_load/__asan_store
calls.
This is usually faster than outline; in some workloads inline is
2 times faster than outline instrumentation.
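
Roughly, for an 8-byte load the two modes generate the following
(a sketch, not the exact GCC output; KASAN_SHADOW_OFFSET is the
configured shadow offset and the report helper is one of the
__asan_report_recover_* stubs added to mm/kasan/report.c below):

        /* outline: a function call before the access */
        __asan_load8(addr);
        val = *(u64 *)addr;

        /* inline: the shadow check itself is emitted at the call site */
        if (unlikely(*(s8 *)((addr >> 3) + KASAN_SHADOW_OFFSET)))
                __asan_report_recover_load8(addr);
        val = *(u64 *)addr;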

The downside of inline instrumentation is a bloated kernel .text size:

size noasan/vmlinux
   text     data     bss      dec     hex    filename
11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux

size outline/vmlinux
   text    data     bss      dec      hex    filename
16553474  1602592  950272  19106338  1238a22 outline/vmlinux

size inline/vmlinux
   text    data     bss      dec      hex    filename
32064759  1598688  946176  34609623  21019d7 inline/vmlinux

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile          |  6 +++++-
 lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
 mm/kasan/kasan.c  | 14 +-------------
 mm/kasan/kasan.h  | 22 ++++++++++++++++++++++
 mm/kasan/report.c | 37 +++++++++++++++++++++++++++++++++++++
 5 files changed, 89 insertions(+), 14 deletions(-)

diff --git a/Makefile b/Makefile
index 6f8be78..01cfa71 100644
--- a/Makefile
+++ b/Makefile
@@ -758,7 +758,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
 ifdef CONFIG_KASAN
-  ifeq ($(CFLAGS_KASAN),)
+ifdef CONFIG_KASAN_INLINE
+CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
+		 $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
+endif
+  ifeq ($(strip $(CFLAGS_KASAN)),)
     $(warning Cannot use CONFIG_KASAN: \
 	      -fsanitize=kernel-address not supported by compiler)
   endif
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 94293c8..ec5d680 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -27,4 +27,28 @@ config TEST_KASAN
 	  out of bounds accesses, use after free. It is useful for testing
 	  kernel debugging features like kernel address sanitizer.
 
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_INLINE if X86_64
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access compiler insert function call
+	  __asan_load*/__asan_store*. These functions performs check
+	  of shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat size of kernel's .text section so
+	  much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  Compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about x2 boost over outline instrumentation), but
+	  make kernel's .text size much bigger.
+
+endchoice
+
 endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index d4552a2..6e34fdb 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -32,11 +32,6 @@
 #include "kasan.h"
 #include "../slab.h"
 
-static inline bool kasan_enabled(void)
-{
-	return !current->kasan_depth;
-}
-
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
  * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
@@ -250,14 +245,7 @@ static __always_inline void check_memory_region(unsigned long addr,
 	if (likely(!memory_is_poisoned(addr, size)))
 		return;
 
-	if (likely(!kasan_enabled()))
-		return;
-
-	info.access_addr = addr;
-	info.access_size = size;
-	info.is_write = write;
-	info.ip = _RET_IP_;
-	kasan_report_error(&info);
+	kasan_report(addr, size, write);
 }
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b70a3d1..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -29,4 +29,26 @@ static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
 	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
 }
 
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
 #endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 03ce28e..39ec639 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -199,3 +199,40 @@ void kasan_report_user_access(struct access_info *info)
 		"=================================\n");
 	spin_unlock_irqrestore(&report_lock, flags);
 }
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_recover_load##size(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_recover_load##size)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_recover_store##size(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_recover_store##size)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_recover_load_n(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_recover_load_n);
+
+void __asan_report_recover_store_n(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_recover_store_n);
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [RFC PATCH v4 13/13] kasan: introduce inline instrumentation
@ 2014-10-06 15:54     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-06 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	Michal Marek

This patch is only a demonstration of how easily this could be achieved.
GCC doesn't support this feature yet. Two patches are required for this:
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

In inline instrumentation mode the compiler directly inserts code
checking shadow memory instead of __asan_load/__asan_store
calls.
This is usually faster than outline instrumentation; in some
workloads inline is 2 times faster than outline.
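
As a rough sketch of the difference (an illustration, not the exact code GCC
emits): for an aligned 8-byte load from 'addr', inline mode expands the check
in place, roughly like

    s8 *shadow = (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    if (unlikely(*shadow))
        __asan_report_recover_load8(addr);  /* report the bad access */

(smaller accesses additionally compare the shadow value against the offset
within the 8-byte granule), while outline mode simply emits a call to
__asan_load8(addr), which does the same shadow lookup inside the kasan runtime.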

The downside of inline instrumentation is a bloated kernel .text size:

size noasan/vmlinux
   text     data     bss      dec     hex    filename
11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux

size outline/vmlinux
   text    data     bss      dec      hex    filename
16553474  1602592  950272  19106338  1238a22 outline/vmlinux

size inline/vmlinux
   text    data     bss      dec      hex    filename
32064759  1598688  946176  34609623  21019d7 inline/vmlinux

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile          |  6 +++++-
 lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
 mm/kasan/kasan.c  | 14 +-------------
 mm/kasan/kasan.h  | 22 ++++++++++++++++++++++
 mm/kasan/report.c | 37 +++++++++++++++++++++++++++++++++++++
 5 files changed, 89 insertions(+), 14 deletions(-)

diff --git a/Makefile b/Makefile
index 6f8be78..01cfa71 100644
--- a/Makefile
+++ b/Makefile
@@ -758,7 +758,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
 ifdef CONFIG_KASAN
-  ifeq ($(CFLAGS_KASAN),)
+ifdef CONFIG_KASAN_INLINE
+CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
+		 $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
+endif
+  ifeq ($(strip $(CFLAGS_KASAN)),)
     $(warning Cannot use CONFIG_KASAN: \
 	      -fsanitize=kernel-address not supported by compiler)
   endif
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 94293c8..ec5d680 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -27,4 +27,28 @@ config TEST_KASAN
 	  out of bounds accesses, use after free. It is useful for testing
 	  kernel debugging features like kernel address sanitizer.
 
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_INLINE if X86_64
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access compiler insert function call
+	  __asan_load*/__asan_store*. These functions performs check
+	  of shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat size of kernel's .text section so
+	  much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  Compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about x2 boost over outline instrumentation), but
+	  make kernel's .text size much bigger.
+
+endchoice
+
 endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index d4552a2..6e34fdb 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -32,11 +32,6 @@
 #include "kasan.h"
 #include "../slab.h"
 
-static inline bool kasan_enabled(void)
-{
-	return !current->kasan_depth;
-}
-
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
  * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
@@ -250,14 +245,7 @@ static __always_inline void check_memory_region(unsigned long addr,
 	if (likely(!memory_is_poisoned(addr, size)))
 		return;
 
-	if (likely(!kasan_enabled()))
-		return;
-
-	info.access_addr = addr;
-	info.access_size = size;
-	info.is_write = write;
-	info.ip = _RET_IP_;
-	kasan_report_error(&info);
+	kasan_report(addr, size, write);
 }
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b70a3d1..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -29,4 +29,26 @@ static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
 	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
 }
 
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
 #endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 03ce28e..39ec639 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -199,3 +199,40 @@ void kasan_report_user_access(struct access_info *info)
 		"=================================\n");
 	spin_unlock_irqrestore(&report_lock, flags);
 }
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_recover_load##size(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_recover_load##size)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_recover_store##size(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_recover_store##size)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_recover_load_n(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_recover_load_n);
+
+void __asan_report_recover_store_n(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_recover_store_n);
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [RFC PATCH v4 13/13] kasan: introduce inline instrumentation
  2014-10-06 15:54     ` Andrey Ryabinin
@ 2014-10-07  9:17       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-10-07  9:17 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, Michal Marek

On Mon, Oct 6, 2014 at 7:54 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This patch is only a demonstration of how easily this could be achieved.
> GCC doesn't support this feature yet. Two patches are required for this:
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
>     https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html
>
> In inline instrumentation mode the compiler directly inserts code
> checking shadow memory instead of __asan_load/__asan_store
> calls.
> This is usually faster than outline instrumentation; in some
> workloads inline is 2 times faster than outline.
>
> The downside of inline instrumentation is a bloated kernel .text size:
>
> size noasan/vmlinux
>    text     data     bss      dec     hex    filename
> 11759720  1566560  946176  14272456  d9c7c8  noasan/vmlinux
>
> size outline/vmlinux
>    text    data     bss      dec      hex    filename
> 16553474  1602592  950272  19106338  1238a22 outline/vmlinux
>
> size inline/vmlinux
>    text    data     bss      dec      hex    filename
> 32064759  1598688  946176  34609623  21019d7 inline/vmlinux
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Makefile          |  6 +++++-
>  lib/Kconfig.kasan | 24 ++++++++++++++++++++++++
>  mm/kasan/kasan.c  | 14 +-------------
>  mm/kasan/kasan.h  | 22 ++++++++++++++++++++++
>  mm/kasan/report.c | 37 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 89 insertions(+), 14 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index 6f8be78..01cfa71 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -758,7 +758,11 @@ KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
>  ifdef CONFIG_KASAN
> -  ifeq ($(CFLAGS_KASAN),)
> +ifdef CONFIG_KASAN_INLINE
> +CFLAGS_KASAN += $(call cc-option, -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET)) \
> +                $(call cc-option, --param asan-instrumentation-with-call-threshold=10000)
> +endif
> +  ifeq ($(strip $(CFLAGS_KASAN)),)
>      $(warning Cannot use CONFIG_KASAN: \
>               -fsanitize=kernel-address not supported by compiler)
>    endif
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 94293c8..ec5d680 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -27,4 +27,28 @@ config TEST_KASAN
>           out of bounds accesses, use after free. It is useful for testing
>           kernel debugging features like kernel address sanitizer.
>
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_INLINE if X86_64
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access compiler insert function call
> +         __asan_load*/__asan_store*. These functions performs check
> +         of shadow memory. This is slower than inline instrumentation,
> +         however it doesn't bloat size of kernel's .text section so
> +         much as inline does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         Compiler directly inserts code checking shadow memory before
> +         memory accesses. This is faster than outline (in some workloads
> +         it gives about x2 boost over outline instrumentation), but
> +         make kernel's .text size much bigger.
> +
> +endchoice
> +
>  endif
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index d4552a2..6e34fdb 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -32,11 +32,6 @@
>  #include "kasan.h"
>  #include "../slab.h"
>
> -static inline bool kasan_enabled(void)
> -{
> -       return !current->kasan_depth;
> -}
> -
>  /*
>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
>   * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
> @@ -250,14 +245,7 @@ static __always_inline void check_memory_region(unsigned long addr,
>         if (likely(!memory_is_poisoned(addr, size)))
>                 return;
>
> -       if (likely(!kasan_enabled()))
> -               return;
> -
> -       info.access_addr = addr;
> -       info.access_size = size;
> -       info.is_write = write;
> -       info.ip = _RET_IP_;
> -       kasan_report_error(&info);
> +       kasan_report(addr, size, write);
>  }
>
>  void kasan_alloc_pages(struct page *page, unsigned int order)
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index b70a3d1..049349b 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -29,4 +29,26 @@ static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
>         return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
>  }
>
> +static inline bool kasan_enabled(void)
> +{
> +       return !current->kasan_depth;
> +}
> +
> +static __always_inline void kasan_report(unsigned long addr,
> +                                       size_t size,
> +                                       bool is_write)
> +{
> +       struct access_info info;
> +
> +       if (likely(!kasan_enabled()))
> +               return;

/\/\/\/\/\

that's smart

> +       info.access_addr = addr;
> +       info.access_size = size;
> +       info.is_write = is_write;
> +       info.ip = _RET_IP_;
> +       kasan_report_error(&info);
> +}
> +
> +
>  #endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 03ce28e..39ec639 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -199,3 +199,40 @@ void kasan_report_user_access(struct access_info *info)
>                 "=================================\n");
>         spin_unlock_irqrestore(&report_lock, flags);
>  }
> +
> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_recover_load##size(unsigned long addr) \
> +{                                                         \
> +       kasan_report(addr, size, false);                  \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_recover_load##size)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_recover_store##size(unsigned long addr) \
> +{                                                          \
> +       kasan_report(addr, size, true);                    \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_recover_store##size)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_recover_load_n(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_load_n);
> +
> +void __asan_report_recover_store_n(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_recover_store_n);
> --
> 2.1.2
>

looks good to me

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v4 02/13] efi: libstub: disable KASAN for efistub
  2014-10-06 15:53     ` Andrey Ryabinin
@ 2014-10-07  9:19       ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-10-07  9:19 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm

looks good to me

On Mon, Oct 6, 2014 at 7:53 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> KASan, like many other options, should be disabled for this stub
> to prevent build failures.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  drivers/firmware/efi/libstub/Makefile | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index b14bc2b..c5533c7 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -19,6 +19,7 @@ KBUILD_CFLAGS                 := $(cflags-y) \
>                                    $(call cc-option,-fno-stack-protector)
>
>  GCOV_PROFILE                   := n
> +KASAN_SANITIZE                 := n
>
>  lib-y                          := efi-stub-helper.o
>  lib-$(CONFIG_EFI_ARMSTUB)      += arm-stub.o fdt.o
> --
> 2.1.2
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger.
  2014-09-24 12:43   ` Andrey Ryabinin
@ 2014-10-16 17:18     ` Yuri Gribov
  -1 siblings, 0 replies; 862+ messages in thread
From: Yuri Gribov @ 2014-10-16 17:18 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Konstantin Khlebnikov, Sasha Levin,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	linux-kbuild, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Dave Jones

On Wed, Sep 24, 2014 at 4:43 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Hi.
>
> This is the third iteration of the kernel address sanitizer (KASan).
>
> ...
>
> KASAN uses compile-time instrumentation for checking every memory access, therefore you
> will need a fresh GCC >= v5.0.0.

FYI I've backported the Kasan patches to the GCC 4.9 branch. They'll be in the
upcoming 4.9 release.

-Y

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v5 00/12] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-10-27 16:46   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	linux-kernel

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the
kernel to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC >= v4.9.2.

Patches are based on the mmotm-2014-10-23-16-26 tree and are also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v5

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merged the patch 'efi: libstub: disable KASAN for efistub' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        The GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with a future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. For now let's wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to the
      shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches
	  the bug right before it happens, so we always know the exact
	  place of the first bad read/write.


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and to use the compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and other (8 - k) bytes are not;
    Any negative value indicates that the entire 8 bytes are inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether a memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
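
    As a minimal sketch of that check (the helper below is illustrative, not code
    from the patch; the real code in mm/kasan/kasan.c also handles multi-byte and
    unaligned accesses), the validity of a single byte could be tested like this:

         static bool kasan_byte_valid(unsigned long addr)
         {
                 /* One shadow byte covers KASAN_SHADOW_SCALE_SIZE (8) bytes. */
                 s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                 if (shadow == 0)
                         return true;    /* whole 8-byte granule accessible */

                 /* 1..7: only the first 'shadow' bytes are valid; negative: none. */
                 return (s8)(addr & (KASAN_SHADOW_SCALE_SIZE - 1)) < shadow;
         }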

Andrey Ryabinin (12):
  Add kernel address sanitizer infrastructure.
  kasan: Add support for upcoming GCC 5.0 asan ABI changes
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module

 Documentation/kasan.txt               | 174 ++++++++++++
 Makefile                              |  11 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 ++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 ++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 ++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   6 +
 include/linux/kasan.h                 |  69 +++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |   9 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  30 +++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 480 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  32 +++
 mm/kasan/report.c                     | 201 ++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 +++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1534 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
-- 
2.1.2


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v5 01/12] Add kernel address sanitizer infrastructure.
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore a fresh GCC >= v5.0.0 is required.

This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8 bytes are inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether a memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
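
As an illustration (a sketch, not literal compiler output): for a 4-byte store
through some pointer 'p' (a hypothetical example), the instrumented code is
conceptually equivalent to rewriting

     p->field = 1;

into

     __asan_store4((unsigned long)&p->field);
     p->field = 1;

where __asan_store4() looks up the shadow byte for that address and reports
an error if the access is invalid.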

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 174 ++++++++++++++++++
 Makefile                              |  11 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 +++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  15 ++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 336 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  27 +++
 mm/kasan/report.c                     | 169 +++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 792 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..12c50da
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,174 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASan uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 4.9.2
+
+Currently KASan is supported only for the x86_64 architecture and requires the
+kernel to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN, configure the kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+=================
+
+A typical out-of-bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed, or part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above, the arrow points to the shadow byte 03, which means that the
+accessed address is partially addressable.
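+
+Concretely, in the example report above the shadow byte 03 covers the 8-byte
+region starting at c6006f18, so only its first 3 bytes (c6006f18-c6006f1a) are
+accessible; the reported 1-byte write to c6006f1b therefore falls just past the
+end of the allocation, which is why KASAN reports it.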
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
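+
+As a concrete example from this patch, kasan_unpoison_shadow() (mm/kasan/kasan.c)
+unpoisons a region by writing 0 to the shadow of every fully covered 8-byte block
+and the remainder (size & 7) to the shadow byte of a partially covered tail, while
+kasan_poison_shadow() poisons a region by filling its shadow with a caller-supplied
+marker value.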
diff --git a/Makefile b/Makefile
index 382e69c..539e572 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 27ddb80..63b7871 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,3 +71,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..8ce738e
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,336 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..9a9fe9f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,27 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..89a9aa1
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,169 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
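+# A value starting with 'n' in KASAN_SANITIZE_obj.o or KASAN_SANITIZE makes the
+# patsubst below strip the whole concatenated string, so CFLAGS_KASAN is not
+# added for that object.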
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 01/12] Add kernel address sanitizer infrastructure.
@ 2014-10-27 16:46     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore fresh GCC >= v5.0.0 required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8-bytes are inaccessible.
Different negative values used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether memory region is valid to access or not by checking
corresponding shadow memory. If access is not valid an error printed.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 174 ++++++++++++++++++
 Makefile                              |  11 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 +++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  15 ++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 336 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  27 +++
 mm/kasan/report.c                     | 169 +++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 792 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..12c50da
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,174 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASan uses compile-time instrumentation for checking every memory access, therefore you
+will need a special compiler: GCC >= 4.9.2
+
+Currently KASan is supported only for x86_64 architecture and requires kernel
+to be built with SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, this bytes are partially addressable and marked by 'N'.
+
+Markers of inaccessible bytes could be found in mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrows point to the shadow byte 03, which means that the
+accessed address is partially addressable.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow on each memory
+access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow
+memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
+scale and offset to translate a memory address to its corresponding shadow address.
+
+Here is the function witch translate an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
diff --git a/Makefile b/Makefile
index 382e69c..539e572 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,13 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..54cf44f
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,15 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly debugging feature. It consumes about 1/8
+	  of available memory and brings about ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 27ddb80..63b7871 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,3 +71,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..8ce738e
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,336 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..9a9fe9f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,27 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..89a9aa1
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,169 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 02/12] kasan: Add support for upcoming GCC 5.0 asan ABI changes
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

GCC 5.0 will have some changes in the asan ABI: new functions
(__asan_load*_noabort()/__asan_store*_noabort()) will be introduced.
By default, for -fsanitize=kernel-address GCC 5.0 will generate
__asan_load*_noabort() functions instead of __asan_load*().

Details in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

We still need __asan_load*() for GCC 4.9.2, so this patch just adds aliases.
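
For reference, the only difference is the symbol name the compiler emits
for each instrumented access. Roughly (a sketch of the generated
pseudo-code, not actual GCC output; prototypes as in mm/kasan/kasan.c):

        /* 8-byte load of p->field, instrumented: */

        /* GCC 4.9.2, -fsanitize=kernel-address */
        __asan_load8((unsigned long)&p->field);
        v = p->field;

        /* GCC 5.0 */
        __asan_load8_noabort((unsigned long)&p->field);
        v = p->field;

With the aliases below, both names end up in the same check.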

Note: the GCC patch hasn't been upstreamed yet. I'm adding this patch
in advance to avoid breaking KASan on a future GCC update.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kasan/kasan.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 8ce738e..11fa3f8 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -334,3 +334,41 @@ EXPORT_SYMBOL(__asan_storeN);
 /* to shut up compiler complaints */
 void __asan_handle_no_return(void) {}
 EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+void __asan_load1_noabort(unsigned long) __attribute__((alias("__asan_load1")));
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+void __asan_load2_noabort(unsigned long) __attribute__((alias("__asan_load2")));
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+void __asan_load4_noabort(unsigned long) __attribute__((alias("__asan_load4")));
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+void __asan_load8_noabort(unsigned long) __attribute__((alias("__asan_load8")));
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+void __asan_load16_noabort(unsigned long) __attribute__((alias("__asan_load16")));
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+void __asan_loadN_noabort(unsigned long) __attribute__((alias("__asan_loadN")));
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_store1_noabort(unsigned long) __attribute__((alias("__asan_store1")));
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+void __asan_store2_noabort(unsigned long) __attribute__((alias("__asan_store2")));
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+void __asan_store4_noabort(unsigned long) __attribute__((alias("__asan_store4")));
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+void __asan_store8_noabort(unsigned long) __attribute__((alias("__asan_store8")));
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+void __asan_store16_noabort(unsigned long) __attribute__((alias("__asan_store16")));
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+void __asan_storeN_noabort(unsigned long) __attribute__((alias("__asan_storeN")));
+EXPORT_SYMBOL(__asan_storeN_noabort);
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 03/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for kasan:
the compiler inserts an __asan_load call between load_segment and wrmsrl. If
the kernel is built with stackprotector, this results in a boot failure,
because __asan_load itself is built with stackprotector.

To avoid this, irq_stack_union.gs_base is read into a temporary variable
before load_segment, so __asan_load is called before load_segment().
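
Roughly, the old ordering resulted in (a sketch of the effective
sequence; on x86_64 the stack canary is reached through %gs, which is
not usable between these two steps):

        loadsegment(gs, 0);                     /* %gs is being switched */
        __asan_load8((unsigned long)
                     &per_cpu(irq_stack_union.gs_base, cpu));
                                                /* stack-protected call reads
                                                 * the canary via %gs -> boom */
        wrmsrl(MSR_GS_BASE, ...);

Reading gs_base into a local variable first moves the instrumented
access (and thus the __asan_load8() call) above loadsegment().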

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
    which tells the compiler not to instrument this function. However,
    this results in a build failure with CONFIG_KASAN=y and
    CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 4b4f78c..ee5c286 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 04/12] x86_64: add KASan support
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

This patch adds arch specific code for kernel address sanitizer.

16TB of virtual address space is used for the shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.
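
That is exactly 1/8 of the 128TB kernel half of the address space; with
the shadow offset from the Kconfig hunk below, the translation works out
as (a worked sketch):

        shadow(addr) = (addr >> 3) + 0xdfffe90000000000

        shadow(0xffff800000000000) = 0xffffd90000000000  /* KASAN_SHADOW_START */
        shadow(0xffffffffffffffff) = 0xffffe8ffffffffff  /* just below KASAN_SHADOW_END */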

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see
kasan_map_shadow()) and allocate and map real shadow memory, reusing
the vmemmap_populate() function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
(__phys_addr); __phys_addr is instrumented, so __asan_load could be
called before the shadow area is initialized.
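
The relevant definitions are roughly (a simplified sketch of
arch/x86/include/asm/page*.h):

        #define __pa(x)         __phys_addr((unsigned long)(x))
                /* with CONFIG_DEBUG_VIRTUAL=y: out-of-line, instrumented */
        #define __pa_nodebug(x) __phys_addr_nodebug((unsigned long)(x))
                /* static inline, so it gets inlined into callers that are
                 * built without instrumentation */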

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/boot/Makefile            |  2 +
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/include/asm/kasan.h      | 27 ++++++++++++
 arch/x86/kernel/Makefile          |  2 +
 arch/x86/kernel/dumpstack.c       |  5 ++-
 arch/x86/kernel/head64.c          |  9 +++-
 arch/x86/kernel/head_64.S         | 28 +++++++++++++
 arch/x86/mm/Makefile              |  3 ++
 arch/x86/mm/init.c                |  3 ++
 arch/x86/mm/kasan_init_64.c       | 87 +++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |  2 +-
 arch/x86/realmode/rm/Makefile     |  1 +
 arch/x86/vdso/Makefile            |  1 +
 lib/Kconfig.kasan                 |  6 +++
 15 files changed, 175 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6115af9..ba56207 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -138,6 +138,7 @@ config X86
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
 	select X86_FEATURE_NAMES if PROC_FS
+	select HAVE_ARCH_KASAN if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 704f58a..21faab6b7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..056c943
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_map_shadow(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_map_shadow(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f1e774..9d46ee8 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..4a5a597 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,7 @@
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
 #include <asm/init.h>
+#include <asm/kasan.h>
 #include <asm/page.h>
 #include <asm/page_types.h>
 #include <asm/sections.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..c6ea8a4
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,87 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 54cf44f..b458a00 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -12,4 +13,9 @@ config KASAN
 	  of available memory and brings about ~x3 performance slowdown.
 	  For better error detection enable CONFIG_STACKTRACE,
 	  and add slub_debug=U to boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+	default 0xdfffe90000000000 if X86_64
+
 endif
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 05/12] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region, and mark freed pages
as inaccessible.
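
kasan_unpoison_shadow()/kasan_poison_shadow() come from the core kasan
patch; conceptually, poisoning is just a memset of the corresponding
shadow bytes, roughly (a sketch that ignores partially covered 8-byte
granules):

        static void kasan_poison_shadow(const void *addr, size_t size, u8 value)
        {
                u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)addr);

                memset(shadow, value, size >> KASAN_SHADOW_SCALE_SHIFT);
        }

So kasan_free_pages() fills the freed pages' shadow with KASAN_FREE_PAGE
and kasan_alloc_pages() clears it back to 0.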

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index e6e7405..aa529ad 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 11fa3f8..2853c92 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -259,6 +259,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report_error(&info);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 9a9fe9f..ee572c4 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 89a9aa1..707323b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fa94263..9ae7d0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -59,6 +59,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -759,6 +760,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -945,6 +947,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 06/12] mm: slub: introduce virt_to_obj function.
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

virt_to_obj() takes the kmem_cache address, the address of the slab
page, and an address x pointing somewhere inside a slab object, and
returns the address of the beginning of that object.
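
For example (a sketch with made-up numbers), with s->size == 256,
slab_page == 0xffff880001234000 and x == 0xffff880001234178:

        (x - slab_page) % s->size    == 0x178 % 0x100 == 0x78
        virt_to_obj(s, slab_page, x) == x - 0x78 == 0xffff880001234100

i.e. the start of the second object on that slab page.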

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 07/12] mm: slub: share slab_err and object_err functions
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

Remove static and add the function declarations to include/linux/slub_def.h
so they can be used by the kernel address sanitizer.
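
This lets the kasan report code print slub's debug information for the
object containing a bad address, along the lines of (a sketch of an
assumed caller, not part of this patch):

        struct page *page = virt_to_head_page((void *)info->access_addr);
        struct kmem_cache *cache = page->slab_cache;
        void *object = virt_to_obj(cache, page_address(page),
                                   (void *)info->access_addr);

        object_err(cache, page, object, "kasan: bad access detected");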

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 4 ++++
 mm/slub.c                | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..8fed60d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,8 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 80c170e..1458629 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread


* [PATCH v5 08/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

Wrap accesses to the object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.

These hooks separate payload accesses from metadata accesses, which
might be useful for different checkers (e.g. KASan).
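
With KASan enabled these stubs are expected to become something like
(a sketch assuming the kasan_disable_local()/kasan_enable_local()
helpers from the core kasan patch; the actual wiring happens in a later
patch):

        static inline void metadata_access_enable(void)
        {
                /* Redzones and track data are poisoned in the shadow;
                 * silence kasan while slub legitimately reads them. */
                kasan_disable_local();
        }

        static inline void metadata_access_disable(void)
        {
                kasan_enable_local();
        }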

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 1458629..2116ccd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 09/12] mm: slub: add kernel address sanitizer support for slub allocator
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.

Initially, all objects in a newly allocated slab page are marked as
free. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as a redzone
(inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access objects' metadata,
so instrumentation for these files is disabled.
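
As a worked example of the scheme above (an illustration with assumed
sizes, not code from this patch):

	static void kasan_shadow_example(void)
	{
		/* Request 20 bytes; the object comes from the 32-byte cache. */
		char *p = kmalloc(20, GFP_KERNEL);

		if (!p)
			return;
		/*
		 * kasan_kmalloc() has shaped the object's shadow as follows
		 * (KASAN_SHADOW_SCALE_SIZE == 8):
		 *   shadow[0] = 0                     bytes  0..7  accessible
		 *   shadow[1] = 0                     bytes  8..15 accessible
		 *   shadow[2] = 4                     bytes 16..19 ok, 20..23 not
		 *   shadow[3] = KASAN_KMALLOC_REDZONE bytes 24..31 redzone
		 */
		p[19] = 0;	/* fine */
		p[20] = 0;	/* out-of-bounds write, kasan reports it */
		kfree(p);
	}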

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 35 ++++++++++++++++++--
 9 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index b458a00..d16b899 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 63b7871..aa16cec 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 2853c92..0ce187c 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 static inline bool kasan_enabled(void)
 {
@@ -273,6 +274,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ee572c4..b70a3d1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 707323b..03ce28e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -73,11 +78,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4069442..ff8d1a5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -785,6 +785,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -959,8 +960,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 2116ccd..b1f614e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2488,6 +2500,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2514,6 +2527,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2897,6 +2912,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3269,6 +3285,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3312,12 +3330,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3333,6 +3353,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 10/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Alexander Viro

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a few
bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated using kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it makes kasan complain
about such accesses.
To avoid such reports we mark the rounded-up allocation size in the
shadow as accessible.
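
A sketch with assumed numbers (illustration only): for a 5-character
name on a 64-bit kernel, the name itself only needs name->len + 1 == 6
bytes of dname, but the word-at-a-time comparison may read a full
8-byte word, so the hunk below unpoisons the rounded-up area:

	/* name->len == 5, sizeof(unsigned long) == 8 */
	kasan_unpoison_shadow(dname,
			round_up(name->len + 1, sizeof(unsigned long)));
	/* unpoisons round_up(6, 8) == 8 bytes, covering the word-sized
	   read that dentry_string_cmp() may issue past the NUL */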

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index d5a23fd..d58ffcc6 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1429,6 +1431,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 11/12] kmemleak: disable kasan instrumentation for kmemleak
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak uses
the rounded-up size as the object's size. This makes kasan complain
while kmemleak scans memory or calculates an object's checksum. The
simplest solution here is to disable kasan around these accesses.
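
A sketch of why kasan complains (sizes here are an assumed example):
a kmalloc(20) object is served from the 32-byte cache, so kmemleak
records size == 32 while kasan keeps bytes 20..31 poisoned; scanning
or checksumming all 32 bytes then trips the redzone. The fix simply
brackets those accesses, as in the hunks below:

	kasan_disable_local();
	object->checksum = crc32(0, (void *)object->pointer, object->size);
	kasan_enable_local();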

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 12/12] lib: add kasan test module
  2014-10-27 16:46   ` Andrey Ryabinin
@ 2014-10-27 16:46     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like out-of-bounds
accesses and use-after-free. It is useful for testing kernel debugging
features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we might
want to add more stuff here in the future (like out-of-bounds accesses
to stack/global variables and so on).
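
Each test case follows the same pattern: allocate something, then do
one deliberately bad access that kasan is expected to report. For
example (taken from the module below):

	static noinline void __init kmalloc_oob_right(void)
	{
		char *ptr;
		size_t size = 123;

		ptr = kmalloc(size, GFP_KERNEL);
		if (!ptr) {
			pr_err("Allocation failed\n");
			return;
		}

		ptr[size] = 'x';	/* one byte past the allocation */
		kfree(ptr);
	}

The module can only be built as a module (it depends on m); loading it
runs all the test cases from kmalloc_tests_init(), which returns
-EAGAIN so the module does not stay loaded.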

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index d16b899..94293c8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,4 +19,12 @@ config KASAN_SHADOW_OFFSET
 	hex
 	default 0xdfffe90000000000 if X86_64
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84000ec..b387570 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..66a04eb
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size , GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v5 12/12] lib: add kasan test module
@ 2014-10-27 16:46     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 16:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like
out of bounds accesses, use after free. It is useful for testing
kernel debugging features like kernel address sanitizer.

It mostly concentrates on testing of slab allocator, but we
might want to add more different stuff here in future (like
stack/global variables out of bounds accesses and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index d16b899..94293c8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,4 +19,12 @@ config KASAN_SHADOW_OFFSET
 	hex
 	default 0xdfffe90000000000 if X86_64
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84000ec..b387570 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..66a04eb
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size , GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size , GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v5 07/12] mm: slub: share slab_err and object_err functions
  2014-10-27 16:46     ` Andrey Ryabinin
@ 2014-10-27 17:00       ` Joe Perches
  -1 siblings, 0 replies; 862+ messages in thread
From: Joe Perches @ 2014-10-27 17:00 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

On Mon, 2014-10-27 at 19:46 +0300, Andrey Ryabinin wrote:
> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.
[]
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
[]
> @@ -115,4 +115,8 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
[]
> +void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +		u8 *object, char *reason);

Please add __printf(3, 4) to have the compiler catch
format and argument mismatches.
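
For reference, the annotated declaration would look something like this
(__printf() is the kernel wrapper for gcc's format(printf, ...) attribute;
sketch only):

	__printf(3, 4)
	void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);

object_err() takes a plain string rather than a format, so only slab_err()
needs the annotation.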


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v5 07/12] mm: slub: share slab_err and object_err functions
  2014-10-27 17:00       ` Joe Perches
@ 2014-10-27 17:07         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-27 17:07 UTC (permalink / raw)
  To: Joe Perches
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

On 10/27/2014 08:00 PM, Joe Perches wrote:
> On Mon, 2014-10-27 at 19:46 +0300, Andrey Ryabinin wrote:
>> Remove static and add function declarations to mm/slab.h so they
>> could be used by kernel address sanitizer.
> []
>> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> []
>> @@ -115,4 +115,8 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
> []
>> +void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
>> +void object_err(struct kmem_cache *s, struct page *page,
>> +		u8 *object, char *reason);
> 
> Please add __printf(3, 4) to have the compiler catch
> format and argument mismatches.
> 
> 

Will do, thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v5 01/12] Add kernel address sanitizer infrastructure.
  2014-10-27 16:46     ` Andrey Ryabinin
@ 2014-10-27 17:20       ` Jonathan Corbet
  -1 siblings, 0 replies; 862+ messages in thread
From: Jonathan Corbet @ 2014-10-27 17:20 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Michal Marek, Ingo Molnar, Peter Zijlstra

Just looking at kasan.txt...

> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
> new file mode 100644
> index 0000000..12c50da
> --- /dev/null
> +++ b/Documentation/kasan.txt
> @@ -0,0 +1,174 @@
> +Kernel address sanitizer
> +================
> +
> +0. Overview
> +===========
> +
> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> +a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

Documentation is a good place to stick to the 80-column (or slightly less)
limit.  There's no reason to use wide lines here.

> +KASan uses compile-time instrumentation for checking every memory access, therefore you
> +will need a special compiler: GCC >= 4.9.2
> +
> +Currently KASan is supported only for x86_64 architecture and requires kernel
> +to be built with SLUB allocator.

"and requires that the kernel be built with the SLUB allocator."

> +1. Usage
> +=========
> +
> +KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).

That differs from the requirement listed just a few lines above.  Which is
right?  I'm also not sure that a version requirement qualifies as
"special."  

> +To enable KASAN configure kernel with:
> +
> +	  CONFIG_KASAN = y
> +
> +Currently KASAN works only with the SLUB memory allocator.
> +For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
> +at least 'slub_debug=U' in the boot cmdline.
> +
> +To disable instrumentation for specific files or directories, add a line
> +similar to the following to the respective kernel Makefile:
> +
> +        For a single file (e.g. main.o):
> +                KASAN_SANITIZE_main.o := n
> +
> +        For all files in one directory:
> +                KASAN_SANITIZE := n
> +
> +Only files which are linked to the main kernel image or are compiled as
> +kernel modules are supported by this mechanism.

Can you do the opposite?  Disable for all but a few files where you want to
turn it on?  That seems more useful somehow...

> +1.1 Error reports
> +==========
> +
> +A typical out of bounds access report looks like this:
> +
> +==================================================================
> +BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
> +=============================================================================
> +BUG kmalloc-128 (Not tainted): kasan error
> +-----------------------------------------------------------------------------
> +
> +Disabling lock debugging due to kernel taint
> +INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
> +	__slab_alloc.constprop.72+0x64f/0x680
> +	kmem_cache_alloc+0xa8/0xe0
> +	kasan_kmalloc_oob_rigth+0x2c/0x7a
> +	kasan_tests_init+0x8/0xc
> +	do_one_initcall+0x85/0x1a0
> +	kernel_init_freeable+0x1f1/0x279
> +	kernel_init+0x8/0xd0
> +	ret_from_kernel_thread+0x21/0x30
> +INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
> +INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
> +
> +Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
> +Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> +CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
> +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> + 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
> + c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
> + c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
> +Call Trace:
> + [<c1c4446f>] dump_stack+0x4b/0x75
> + [<c11c3f32>] print_trailer+0xf2/0x180
> + [<c11c4ff5>] object_err+0x25/0x30
> + [<c11ccb78>] kasan_report_error+0xf8/0x380
> + [<c1c57940>] ? need_resched+0x21/0x25
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
> + [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
> + [<c11cbacc>] __asan_store1+0x9c/0xa0
> + [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
> + [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
> + [<c1f8276b>] kasan_tests_init+0x8/0xc
> + [<c1000435>] do_one_initcall+0x85/0x1a0
> + [<c1f6f508>] ? repair_env_string+0x23/0x66
> + [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
> + [<c10c9883>] ? parse_args+0x33/0x450
> + [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
> + [<c1000558>] kernel_init+0x8/0xd0
> + [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
> + [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
> +Write of size 1 by thread T1:
> +Memory state around the buggy address:
> + c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
> + c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
> +>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
> +                    ^
> + c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
> + c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
> + c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
> + c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
> + c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
> +==================================================================
> +
> +In the last section the report shows memory state around the accessed address.
> +Reading this part requires some more understanding of how KASAN works.

Which is all great, but it might be nice to say briefly what the other
sections are telling us?

> +Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,

What's KASAN_SHADOW_SCALE_SIZE and why is it something we should care
about?  Is it a parameter people can set?

> +partially addressable, freed or they can be part of a redzone.
> +If bytes are marked as addressable that means that they belong to some
> +allocated memory block and it is possible to read or modify any of these
> +bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
> +When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
> +memory block, this bytes are partially addressable and marked by 'N'.

Is that a literal "N" or some number indicating which bytes are accessible?
From what's below, I'm guessing the latter.  It would be far better to be
clear on that.

> +Markers of inaccessible bytes could be found in mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
> +#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
> +
> +In the report above the arrows point to the shadow byte 03, which means that the
> +accessed address is partially addressable.

So N = 03 here?

> +2. Implementation details
> +========================
> +
> +From a high level, our approach to memory error detection is similar to that
> +of kmemcheck: use shadow memory to record whether each byte of memory is safe
> +to access, and use compile-time instrumentation to check shadow on each memory
> +access.

"to check the shadow memory on each..."

> +AddressSanitizer dedicates 1/8 of kernel memory to its shadow
> +memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
> +scale and offset to translate a memory address to its corresponding shadow address.
> +
> +Here is the function witch translate an address to its corresponding shadow address:
> +
> +unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +
> +where KASAN_SHADOW_SCALE_SHIFT = 3.
> +
> +Each shadow byte corresponds to 8 bytes of the main memory. We use the
> +following encoding for each shadow byte: 0 means that all 8 bytes of the
> +corresponding memory region are addressable; k (1 <= k <= 7) means that
> +the first k bytes are addressable, and other (8 - k) bytes are not;
> +any negative value indicates that the entire 8-byte word is inaccessible.
> +We use different negative values to distinguish between different kinds of
> +inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

This discussion belongs in the section above where you're talking about
interpreting the markings.

> +Poisoning or unpoisoning a byte in the main memory means writing some special
> +value into the corresponding shadow memory. This value indicates whether the
> +byte is addressable or not.

Is this something developers would do?  Are there helper functions to do
it?  I'd say either fill that in or leave this last bit out.

Interesting work!

jon

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v5 01/12] Add kernel address sanitizer infrastructure.
  2014-10-27 17:20       ` Jonathan Corbet
@ 2014-10-28 12:24         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-10-28 12:24 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: Andrew Morton, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Michal Marek, Ingo Molnar, Peter Zijlstra

On 10/27/2014 08:20 PM, Jonathan Corbet wrote:
> Just looking at kasan.txt...
> 
>> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
>> new file mode 100644
>> index 0000000..12c50da
>> --- /dev/null
>> +++ b/Documentation/kasan.txt
>> @@ -0,0 +1,174 @@
>> +Kernel address sanitizer
>> +================
>> +
>> +0. Overview
>> +===========
>> +
>> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
>> +a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> 
> Documentation is a good place to stick to the 80-column (or slightly less)
> limit.  There's no reason to use wide lines here.
> 

Agreed. I wonder why checkpatch doesn't warn here.

>> +KASan uses compile-time instrumentation for checking every memory access, therefore you
>> +will need a special compiler: GCC >= 4.9.2
>> +
>> +Currently KASan is supported only for x86_64 architecture and requires kernel
>> +to be built with SLUB allocator.
> 
> "and requires that the kernel be built with the SLUB allocator."
> 
>> +1. Usage
>> +=========
>> +
>> +KASAN requires the kernel to be built with a special compiler (GCC >= 5.0.0).
> 
> That differs from the requirement listed just a few lines above.  Which is
> right?  I'm also not sure that a version requirement qualifies as
> "special."  
> 

4.9.2 is correct now. Yuri backported the kasan patches to the 4.9 branch recently.
I agree that "special" doesn't fit here. "Certain" would be better here:

KASAN requires the kernel to be built with a certain compiler version: GCC >= 4.9.2

>> +To enable KASAN configure kernel with:
>> +
>> +	  CONFIG_KASAN = y
>> +
>> +Currently KASAN works only with the SLUB memory allocator.
>> +For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
>> +at least 'slub_debug=U' in the boot cmdline.
>> +
>> +To disable instrumentation for specific files or directories, add a line
>> +similar to the following to the respective kernel Makefile:
>> +
>> +        For a single file (e.g. main.o):
>> +                KASAN_SANITIZE_main.o := n
>> +
>> +        For all files in one directory:
>> +                KASAN_SANITIZE := n
>> +
>> +Only files which are linked to the main kernel image or are compiled as
>> +kernel modules are supported by this mechanism.
> 
> Can you do the opposite?  Disable for all but a few files where you want to
> turn it on?  That seems more useful somehow...
> 

There was a config option KASAN_SANTIZE_ALL in the v1 patch set, but I decided to remove it
because I think there is no good use case for it. Instrumenting only a few files
is not a good idea, because it's quite common to pass a pointer to an external function
where the pointer dereference actually happens.

So the bug could be in the instrumented code, but it could be missed because the dereference
happens in some generic external function.


>> +1.1 Error reports
>> +==========
>> +
>> +A typical out of bounds access report looks like this:
>> +
>> +==================================================================
>> +BUG: AddressSanitizer: buffer overflow in kasan_kmalloc_oob_right+0x6a/0x7a at addr c6006f1b
>> +=============================================================================
>> +BUG kmalloc-128 (Not tainted): kasan error
>> +-----------------------------------------------------------------------------
>> +
>> +Disabling lock debugging due to kernel taint
>> +INFO: Allocated in kasan_kmalloc_oob_right+0x2c/0x7a age=5 cpu=0 pid=1
>> +	__slab_alloc.constprop.72+0x64f/0x680
>> +	kmem_cache_alloc+0xa8/0xe0
>> +	kasan_kmalloc_oob_rigth+0x2c/0x7a
>> +	kasan_tests_init+0x8/0xc
>> +	do_one_initcall+0x85/0x1a0
>> +	kernel_init_freeable+0x1f1/0x279
>> +	kernel_init+0x8/0xd0
>> +	ret_from_kernel_thread+0x21/0x30
>> +INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
>> +INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
>> +
>> +Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
>> +Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> +CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
>> +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
>> + 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
>> + c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
>> + c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
>> +Call Trace:
>> + [<c1c4446f>] dump_stack+0x4b/0x75
>> + [<c11c3f32>] print_trailer+0xf2/0x180
>> + [<c11c4ff5>] object_err+0x25/0x30
>> + [<c11ccb78>] kasan_report_error+0xf8/0x380
>> + [<c1c57940>] ? need_resched+0x21/0x25
>> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
>> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
>> + [<c11cb92b>] ? poison_shadow+0x2b/0x30
>> + [<c1f82763>] ? kasan_kmalloc_oob_right+0x7a/0x7a
>> + [<c11cbacc>] __asan_store1+0x9c/0xa0
>> + [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
>> + [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
>> + [<c1f8276b>] kasan_tests_init+0x8/0xc
>> + [<c1000435>] do_one_initcall+0x85/0x1a0
>> + [<c1f6f508>] ? repair_env_string+0x23/0x66
>> + [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
>> + [<c10c9883>] ? parse_args+0x33/0x450
>> + [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
>> + [<c1000558>] kernel_init+0x8/0xd0
>> + [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
>> + [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
>> +Write of size 1 by thread T1:
>> +Memory state around the buggy address:
>> + c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
>> + c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
>> + c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
>> + c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
>> + c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
>> +>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
>> +                    ^
>> + c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
>> + c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
>> + c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
>> + c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
>> + c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
>> +==================================================================
>> +
>> +In the last section the report shows memory state around the accessed address.
>> +Reading this part requires some more understanding of how KASAN works.
> 
> Which is all great, but it might be nice to say briefly what the other
> sections are telling us?
> 

The other sections come from the slub debug output. They are described in Documentation/vm/slub.txt.
To clear this up I will add the following here:

The first sections describe the slub object where the bad access happened. See the 'SLUB Debug output'
section in Documentation/vm/slub.txt for details.

>> +Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
> 
> What's KASAN_SHADOW_SCALE_SIZE and why is it something we should care
> about?  Is it a parameter people can set?
> 

It's a constant equal to 8. It determines how many bytes of memory are mapped to one shadow byte.
Just changing this value won't work, so I'll replace it with 8 in the text.

>> +partially addressable, freed or they can be part of a redzone.
>> +If bytes are marked as addressable that means that they belong to some
>> +allocated memory block and it is possible to read or modify any of these
>> +bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
>> +When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
>> +memory block, this bytes are partially addressable and marked by 'N'.
> 
> Is that a literal "N" or some number indicating which bytes are accessible?
> From what's below, I'm guessing the latter.  It would be far better to be
> clear on that.
> 

Will do.

>> +Markers of inaccessible bytes could be found in mm/kasan/kasan.h header:
>> +
>> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_PADDING      0xFD  /* Slab page redzone, does not belong to any slub object */
>> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
>> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
>> +#define KASAN_SLAB_FREE         0xFA  /* free slab page */
>> +#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>> +
>> +In the report above the arrows point to the shadow byte 03, which means that the
>> +accessed address is partially addressable.
> 
> So N = 03 here?
> 

Right.
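
To spell it out with the numbers from the report above: the shadow byte 03
covers the 8-byte granule c6006f18..c6006f1f and says that only its first
three bytes (c6006f18..c6006f1a) are addressable. The accessed address
c6006f1b is the fourth byte of that granule, so the 1-byte write gets
reported.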

>> +2. Implementation details
>> +========================
>> +
>> +From a high level, our approach to memory error detection is similar to that
>> +of kmemcheck: use shadow memory to record whether each byte of memory is safe
>> +to access, and use compile-time instrumentation to check shadow on each memory
>> +access.
> 
> "to check the shadow memory on each..."
> 
>> +AddressSanitizer dedicates 1/8 of kernel memory to its shadow
>> +memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a
>> +scale and offset to translate a memory address to its corresponding shadow address.
>> +
>> +Here is the function witch translate an address to its corresponding shadow address:
>> +
>> +unsigned long kasan_mem_to_shadow(unsigned long addr)
>> +{
>> +	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
>> +}
>> +
>> +where KASAN_SHADOW_SCALE_SHIFT = 3.
>> +
>> +Each shadow byte corresponds to 8 bytes of the main memory. We use the
>> +following encoding for each shadow byte: 0 means that all 8 bytes of the
>> +corresponding memory region are addressable; k (1 <= k <= 7) means that
>> +the first k bytes are addressable, and other (8 - k) bytes are not;
>> +any negative value indicates that the entire 8-byte word is inaccessible.
>> +We use different negative values to distinguish between different kinds of
>> +inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
> 
> This discussion belongs in the section above where you're talking about
> interpreting the markings.
> 
Right, I'll move it to a proper place.

>> +Poisoning or unpoisoning a byte in the main memory means writing some special
>> +value into the corresponding shadow memory. This value indicates whether the
>> +byte is addressable or not.
> 
> Is this something developers would do?  Are there helper functions to do
> it?  I'd say either fill that in or leave this last bit out.
> 

Currently it's almost an internal thing, with only one exceptional case.
Details are in patch 10/12 "fs: dcache: manually unpoison dname after allocation to shut up kasan's reports".
I'll remove this paragraph then.

FYI, at some future point poisoning magic fields in structs could be used to catch memory corruptions inside structures.


> Interesting work!
> 
> jon
> 


Thanks.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-11-05 14:53   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, linux-kernel

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v4.9.2

Patches are based on the mmotm-2014-10-23-16-26 tree and are also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v6

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changes in Documentation/kasan.txt per Jonathan's comments.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to the CFLAGS for the mm/kasan/kasan.c file (see the sketch below),
      because -fconserve-stack is bogus and causes an unnecessary split in __asan_load1/__asan_store1.
      Because of this split kasan_report() is actually not inlined (even though it is __always_inline)
      and _RET_IP_ gives an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
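
      A minimal sketch of that Makefile change, assuming the standard kbuild
      per-object CFLAGS mechanism (the actual mm/kasan/Makefile in the series
      may differ):

          # mm/kasan/Makefile (sketch)
          CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)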

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but to will be soon. I'm adding this in advance
        in order to avoid breaking kasan with future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated the GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. Let's first wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in last patch. This will require two not
         yet-in-trunk-patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since the per-arch work is much bigger now, support for the arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (inserts __asan_load/__asan_store calls before mem*() calls).

     - Branch profiling is disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator were moved to the right places


Comparison with other debugging features:
=========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both reads and writes.

	- In some cases (e.g. redzone overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.
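
	  To make the difference concrete, here is a minimal illustrative
	  sketch (not code from this series; the function name is made up,
	  and lib/test_kasan.c added in the last patch contains the real
	  test cases). KASan reports the bad read at the access itself,
	  while SLUB_DEBUG does not catch it at all, because redzones are
	  only verified on alloc/free and only detect stray writes:

		#include <linux/slab.h>

		/* Illustrative only */
		static noinline void kasan_oob_read_example(void)
		{
			char *p = kmalloc(123, GFP_KERNEL);

			if (!p)
				return;

			((volatile char *)p)[123]; /* OOB read: KASan reports it right here */
			kfree(p); /* SLUB_DEBUG redzone checks run here and only catch writes */
		}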


Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
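
    For a 1-byte access this check boils down to the following sketch
    (simplified from the memory_is_poisoned_1() helper added in the first
    patch; the function name here is illustrative and error reporting is
    omitted; kasan_mem_to_shadow() and KASAN_SHADOW_MASK come from
    include/linux/kasan.h and mm/kasan/kasan.h added by this series):

         #include <linux/types.h>
         #include <linux/kasan.h>
         #include "kasan.h"   /* for KASAN_SHADOW_MASK */

         static bool kasan_byte_accessible_sketch(unsigned long addr)
         {
                 s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                 if (shadow == 0)
                         return true;    /* whole 8-byte granule is accessible */
                 if (shadow < 0)
                         return false;   /* redzone, freed memory, ... */

                 /* 1 <= shadow <= 7: only the first 'shadow' bytes are valid */
                 return (addr & KASAN_SHADOW_MASK) < shadow;
         }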

Andrey Ryabinin (11):
  Add kernel address sanitizer infrastructure.
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module

 Documentation/kasan.txt               | 169 ++++++++++++
 Makefile                              |  23 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 ++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 ++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   6 +
 include/linux/kasan.h                 |  69 +++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |  10 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 ++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 468 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  54 ++++
 mm/kasan/report.c                     | 238 +++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 +++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1617 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joe Perches <joe@perches.com>
-- 
2.1.3


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v6 01/11] Add kernel address sanitizer infrastructure.
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
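
For instance, with outline instrumentation a 1-byte store like

     p[i] = 0x5a;

is conceptually compiled as if the check were written by hand (this is only a
sketch; the actual code generation is up to GCC):

     __asan_store1((unsigned long)&p[i]);  /* check the shadow for this 1-byte write */
     p[i] = 0x5a;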

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 ++++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 362 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 206 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 916 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..d532f91
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline
+are compiler instrumentation types. The former produces a smaller binary while the
+latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and a nicer report, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Write of size 1 by task modprobe:
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler inserts
+function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
+access of size 1, 2, 4, 8 or 16. These functions check whether memory access is
+valid or not by checking corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making
+function calls, GCC directly inserts the code to check the shadow memory.
+This option significantly enlarges the kernel but it gives an x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 382e69c..786268c 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call
+	  (__asan_load*/__asan_store*). These functions check the
+	  shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about an x2 boost over outline instrumentation), but
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 27ddb80..63b7871 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,3 +71,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..ef2d313
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,7 @@
+KASAN_SANITIZE := n
+
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..ea5e464
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,362 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+void __asan_load1_noabort(unsigned long) __attribute__((alias("__asan_load1")));
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+void __asan_load2_noabort(unsigned long) __attribute__((alias("__asan_load2")));
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+void __asan_load4_noabort(unsigned long) __attribute__((alias("__asan_load4")));
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+void __asan_load8_noabort(unsigned long) __attribute__((alias("__asan_load8")));
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+void __asan_load16_noabort(unsigned long) __attribute__((alias("__asan_load16")));
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+void __asan_loadN_noabort(unsigned long) __attribute__((alias("__asan_loadN")));
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_store1_noabort(unsigned long) __attribute__((alias("__asan_store1")));
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+void __asan_store2_noabort(unsigned long) __attribute__((alias("__asan_store2")));
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+void __asan_store4_noabort(unsigned long) __attribute__((alias("__asan_store4")));
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+void __asan_store8_noabort(unsigned long) __attribute__((alias("__asan_store8")));
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+void __asan_store16_noabort(unsigned long) __attribute__((alias("__asan_store16")));
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+void __asan_storeN_noabort(unsigned long) __attribute__((alias("__asan_storeN")));
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..7f559b4
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,206 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 01/11] Add kernel address sanitizer infrastructure.
@ 2014-11-05 14:53     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds the infrastructure for the kernel address sanitizer; it's not
available for use yet. The idea and some code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the shadow
memory on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
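
As a worked example (taking KASAN_SHADOW_OFFSET = 0xdfffe90000000000, the x86_64
value chosen later in this series; other arches would use different constants),
the start of the x86_64 direct mapping translates as:

     kasan_mem_to_shadow(0xffff880000000000)
         = (0xffff880000000000 >> 3) + 0xdfffe90000000000
         = 0xffffda0000000000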

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
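
For example, a 27-byte object (27 = 3*8 + 3) is described by the shadow bytes

     00 00 00 03

followed by negative (poisoned) shadow bytes for the surrounding redzone.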

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access or not by checking
the corresponding shadow memory. If the access is not valid, an error is printed.
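
For instance, with outline instrumentation a 4-byte store such as

     p->counter = 1;

is roughly compiled as if it were (a sketch, not the exact GCC output):

     __asan_store4((unsigned long)&p->counter);
     p->counter = 1;

so the shadow check in check_memory_region() runs right before the real access.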

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 ++++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 362 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 206 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 916 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..d532f91
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or newer.
+
+Currently KASan is supported only for the x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the two compiler instrumentation types. The former produces a smaller
+binary while the latter is 1.1-2 times faster. Inline instrumentation requires
+GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and a nicer report, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Write of size 1 by task modprobe:
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed, or part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible.
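+
+In this example only the first 3 bytes of that 8-byte region are accessible and
+the failing 1-byte write lands at offset 3 within it, so the runtime check
+(a sketch of the test in mm/kasan/kasan.c)
+
+	shadow_val && (addr & KASAN_SHADOW_MASK) >= shadow_val
+
+evaluates to true (3 >= 3) and the access is reported.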
+
+
+2. Implementation details
+=========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives a 1.1x-2x performance boost
+over an outline-instrumented kernel.
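+
+As a rough sketch (not the exact generated code), for an aligned 8-byte load the
+inlined check looks like:
+
+	shadow = *(s8 *)((addr >> 3) + KASAN_SHADOW_OFFSET);
+	if (unlikely(shadow))
+		__asan_report_load8_noabort(addr);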
diff --git a/Makefile b/Makefile
index 382e69c..786268c 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -426,7 +426,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -757,6 +757,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 706a9f7..3c3ef5d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1661,6 +1661,9 @@ struct task_struct {
 	unsigned int	sequential_io;
 	unsigned int	sequential_io_avg;
 #endif
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and causes roughly a 3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform a check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  but it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code that checks the shadow memory
+	  before memory accesses. This is faster than outline instrumentation
+	  (in some workloads it gives about a 2x boost), but makes the
+	  kernel's .text section much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 27ddb80..63b7871 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,3 +71,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..ef2d313
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,7 @@
+KASAN_SANITIZE := n
+
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..ea5e464
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,362 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+void __asan_load1_noabort(unsigned long) __attribute__((alias("__asan_load1")));
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+void __asan_load2_noabort(unsigned long) __attribute__((alias("__asan_load2")));
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+void __asan_load4_noabort(unsigned long) __attribute__((alias("__asan_load4")));
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+void __asan_load8_noabort(unsigned long) __attribute__((alias("__asan_load8")));
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+void __asan_load16_noabort(unsigned long) __attribute__((alias("__asan_load16")));
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+void __asan_loadN_noabort(unsigned long) __attribute__((alias("__asan_loadN")));
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_store1_noabort(unsigned long) __attribute__((alias("__asan_store1")));
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+void __asan_store2_noabort(unsigned long) __attribute__((alias("__asan_store2")));
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+void __asan_store4_noabort(unsigned long) __attribute__((alias("__asan_store4")));
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+void __asan_store8_noabort(unsigned long) __attribute__((alias("__asan_store8")));
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+void __asan_store16_noabort(unsigned long) __attribute__((alias("__asan_store16")));
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+void __asan_storeN_noabort(unsigned long) __attribute__((alias("__asan_storeN")));
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..7f559b4
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,206 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by task %s:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 54be19a..c1517e2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 02/11] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for kasan.
The compiler inserts an __asan_load call in between load_segment and wrmsrl. If the
kernel is built with a stack protector, this will result in a boot failure because
__asan_load itself uses the stack protector.
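
Roughly, the instrumented load_percpu_segment() ended up doing (a sketch of the
problem, not the exact compiler output):

	loadsegment(gs, 0);
	__asan_load8(...);		/* its stack-protector code reads the canary via GS */
	wrmsrl(MSR_GS_BASE, ...);

i.e. the instrumented call runs while the GS base is still being switched.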

To avoid this, irq_stack_union.gs_base is stored in a temporary variable before
load_segment, so __asan_load will be called before load_segment().

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
    which tells the compiler not to instrument this function. However, this
    will result in a build failure with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 4b4f78c..ee5c286 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 03/11] x86_64: add KASan support
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

This patch adds the arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.
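
As a quick sanity check of these numbers (a sketch using the constants from this
patch and kasan_shadow_to_mem() from mm/kasan/kasan.h):

     0xffffe90000000000 - 0xffffd90000000000 = 16TB of shadow

     kasan_shadow_to_mem(0xffffd90000000000)
         = (0xffffd90000000000 - 0xdfffe90000000000) << 3
         = 0xffff800000000000

i.e. 16TB of shadow, one byte per 8 bytes, covers the whole 128TB kernel half of
the address space starting at 0xffff800000000000.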

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.
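
In other words, the call chain being avoided is roughly (a sketch):

	__pa(x) -> __phys_addr(x)	/* out-of-line and instrumented */
	        -> __asan_load*()	/* shadow is not mapped yet */

while __pa_nodebug() expands inline and avoids the instrumented call.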

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/boot/Makefile            |  2 +
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/include/asm/kasan.h      | 27 ++++++++++++
 arch/x86/kernel/Makefile          |  2 +
 arch/x86/kernel/dumpstack.c       |  5 ++-
 arch/x86/kernel/head64.c          |  9 +++-
 arch/x86/kernel/head_64.S         | 28 +++++++++++++
 arch/x86/mm/Makefile              |  3 ++
 arch/x86/mm/init.c                |  3 ++
 arch/x86/mm/kasan_init_64.c       | 87 +++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |  2 +-
 arch/x86/realmode/rm/Makefile     |  1 +
 arch/x86/vdso/Makefile            |  1 +
 lib/Kconfig.kasan                 |  2 +
 15 files changed, 171 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6115af9..ba56207 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -138,6 +138,7 @@ config X86
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
 	select X86_FEATURE_NAMES if PROC_FS
+	select HAVE_ARCH_KASAN if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 704f58a..21faab6b7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..056c943
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_map_shadow(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_map_shadow(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f1e774..9d46ee8 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..4a5a597 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,7 @@
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
 #include <asm/init.h>
+#include <asm/kasan.h>
 #include <asm/page.h>
 #include <asm/page_types.h>
 #include <asm/sections.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..c6ea8a4
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,87 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..386cc8b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdfffe90000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 03/11] x86_64: add KASan support
@ 2014-11-05 14:53     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

This patch adds the arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/boot/Makefile            |  2 +
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/include/asm/kasan.h      | 27 ++++++++++++
 arch/x86/kernel/Makefile          |  2 +
 arch/x86/kernel/dumpstack.c       |  5 ++-
 arch/x86/kernel/head64.c          |  9 +++-
 arch/x86/kernel/head_64.S         | 28 +++++++++++++
 arch/x86/mm/Makefile              |  3 ++
 arch/x86/mm/init.c                |  3 ++
 arch/x86/mm/kasan_init_64.c       | 87 +++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |  2 +-
 arch/x86/realmode/rm/Makefile     |  1 +
 arch/x86/vdso/Makefile            |  1 +
 lib/Kconfig.kasan                 |  2 +
 15 files changed, 171 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6115af9..ba56207 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -138,6 +138,7 @@ config X86
 	select HAVE_ACPI_APEI_NMI if ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
 	select X86_FEATURE_NAMES if PROC_FS
+	select HAVE_ARCH_KASAN if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 704f58a..21faab6b7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
 # create a compressed vmlinux image from the original vmlinux
 #
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..056c943
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_map_shadow(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_map_shadow(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f1e774..9d46ee8 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..4a5a597 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,7 @@
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
 #include <asm/init.h>
+#include <asm/kasan.h>
 #include <asm/page.h>
 #include <asm/page_types.h>
 #include <asm/sections.h>
@@ -685,5 +686,7 @@ void __init zone_sizes_init(void)
 #endif
 
 	free_area_init_nodes(max_zone_pfns);
+
+	kasan_map_shadow();
 }
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..c6ea8a4
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,87 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+void __init kasan_map_shadow(void)
+{
+	int i;
+
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..386cc8b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdfffe90000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 04/11] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

Add kernel address sanitizer hooks to mark the addresses of allocated
pages as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
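
For illustration only (not part of the patch), a minimal sketch of the kind
of bug these hooks make detectable, assuming a CONFIG_KASAN build with this
series applied:

     struct page *page = alloc_pages(GFP_KERNEL, 0);  /* kasan_alloc_pages() unpoisons shadow */
     char *p = page_address(page);

     __free_pages(page, 0);   /* kasan_free_pages() poisons shadow with KASAN_FREE_PAGE */
     p[0] = 'x';              /* compiler-inserted __asan_store1() reports a use-after-free */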

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index e6e7405..aa529ad 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index ea5e464..7d4dcc3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da1d78..2a6a961 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 7f559b4..bfe3a31 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fa94263..9ae7d0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -59,6 +59,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -759,6 +760,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -945,6 +947,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 05/11] mm: slub: introduce virt_to_obj function.
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

virt_to_obj() takes the kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns the
address of the beginning of that object.
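
A quick worked example (illustration only, not part of the patch): for a
cache with s->size == 128 and a pointer x that points 80 bytes into the
third object of a slab page,

     x - slab_page                = 2 * 128 + 80 = 336
     (x - slab_page) % s->size    = 336 % 128    = 80
     virt_to_obj(s, slab_page, x) == x - 80 == slab_page + 256

i.e. the start of the third object.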

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 06/11] mm: slub: share slab_err and object_err functions
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Joe Perches, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Pekka Enberg, David Rientjes

Remove the static qualifier from slab_err() and object_err() and add their
declarations to include/linux/slub_def.h so they can be used by the kernel
address sanitizer.
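
For illustration, the kasan report path added later in this series
(mm/kasan/report.c in patch 08/11) calls them like this:

     slab_err(cache, page, "access to slab redzone");
     object_err(cache, page, object, "kasan error");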

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 mm/slub.c                | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..144b5cb 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,9 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+__printf(3, 4)
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 80c170e..1458629 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 07/11] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which may be useful for different checkers (e.g. KASan).
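
For illustration, the intended usage pattern (taken from the
check_bytes_and_report() hunk below):

     metadata_access_enable();                 /* a checker may ignore accesses made here */
     fault = memchr_inv(start, value, bytes);  /* reads redzone/poison bytes              */
     metadata_access_disable();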

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 1458629..2116ccd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 08/11] mm: slub: add kernel address sanitizer support for slub allocator
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested by
the caller is marked as accessible, and the rest of the object (including
slub's metadata) is marked as a redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access an object's metadata,
so instrumentation for these files is disabled.
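
A small worked example (illustration only): assume kmalloc(20, GFP_KERNEL)
is served from the kmalloc-32 cache with no debug metadata, so
cache->size == 32, and KASAN_SHADOW_SCALE_SIZE == 8. kasan_kmalloc() then
leaves the object's four shadow bytes as

     shadow[0] = 0                        /* bytes  0..7   accessible */
     shadow[1] = 0                        /* bytes  8..15  accessible */
     shadow[2] = 4                        /* bytes 16..19  accessible */
     shadow[3] = KASAN_KMALLOC_REDZONE    /* bytes 24..31  poisoned   */

so p[24] triggers an out-of-bounds report, and kfree(p) repoisons all four
bytes with KASAN_KMALLOC_FREE, so any later access is reported as a
use-after-free.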

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 35 ++++++++++++++++++--
 9 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 386cc8b..1fa4fe8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 63b7871..aa16cec 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7d4dcc3..37b8b26 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2a6a961..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index bfe3a31..cbd5c0c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -73,11 +78,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4069442..ff8d1a5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -785,6 +785,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -959,8 +960,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 2116ccd..b1f614e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2488,6 +2500,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2514,6 +2527,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2897,6 +2912,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3269,6 +3285,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3312,12 +3330,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3333,6 +3353,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 08/11] mm: slub: add kernel address sanitizer support for slub allocator
@ 2014-11-05 14:53     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as free.
Later, when a slub object is allocated, the number of bytes requested by
the caller is marked as accessible, and the rest of the object (including
slub's metadata) is marked as a redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the really allocated area. Such callers may validly access
the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access an object's metadata,
so instrumentation for these files is disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 35 ++++++++++++++++++--
 9 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c265bec..5f97037 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 386cc8b..1fa4fe8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 63b7871..aa16cec 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7d4dcc3..37b8b26 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2a6a961..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index bfe3a31..cbd5c0c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -73,11 +78,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4069442..ff8d1a5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -785,6 +785,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -959,8 +960,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 2116ccd..b1f614e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1416,8 +1426,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2488,6 +2500,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2514,6 +2527,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2897,6 +2912,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3269,6 +3285,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3312,12 +3330,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3333,6 +3353,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.3
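
A worked example of the redzone arithmetic in kasan_kmalloc() above (the
cache and sizes here are illustrative, not taken from the patch):

	/*
	 * Suppose an object comes from a 32-byte cache and the caller asked
	 * for size = 10, with KASAN_SHADOW_SCALE_SIZE == 8:
	 *
	 *   redzone_start = round_up(object + 10, 8)  = object + 16
	 *   redzone_end   = object + cache->size      = object + 32
	 *
	 * kasan_unpoison_shadow(object, 10) marks bytes [0, 10) of the
	 * object as accessible, and kasan_poison_shadow() marks bytes
	 * [16, 32) with KASAN_KMALLOC_REDZONE, so a read at object + 20 is
	 * reported as an out-of-bounds access.
	 */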


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 09/11] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:53     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Alexander Viro

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated with kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it does make kasan complain
about such accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.
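
For illustration only (the helper below is made up, not code from this
patch or from fs/dcache.c), the word-at-a-time pattern looks roughly
like this:

	#include <stddef.h>

	/*
	 * Compare 'len' bytes one unsigned long at a time: the final load
	 * may cover bytes beyond 'len'.  For a kmalloc()ed name that is
	 * harmless because the allocation is rounded up, but kasan only
	 * unpoisons the requested 'len' bytes and so reports the tail read.
	 */
	static int word_at_a_time_cmp(const unsigned long *a,
				      const unsigned long *b, size_t len)
	{
		size_t words = (len + sizeof(unsigned long) - 1) /
			       sizeof(unsigned long);
		size_t i;

		for (i = 0; i < words; i++)	/* last iteration may overread */
			if (a[i] != b[i])
				return 1;
		return 0;
	}

Unpoisoning round_up(name->len + 1, sizeof(unsigned long)) bytes of
shadow, as the hunk below does, makes that tail read legitimate from
kasan's point of view.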

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index d5a23fd..d58ffcc6 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1429,6 +1431,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 09/11] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-11-05 14:53     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:53 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Alexander Viro

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated with kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it does make kasan complain
about such accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index d5a23fd..d58ffcc6 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1429,6 +1431,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 10/11] kmemleak: disable kasan instrumentation for kmemleak
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:54 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to temporarily disable kasan
around those accesses.
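
For concreteness, a hedged sketch of the pattern (the helper and the
sizes below are illustrative, not code from this patch):

	#include <linux/crc32.h>
	#include <linux/kasan.h>
	#include <linux/slab.h>

	/* kasan unpoisons only the 10 requested bytes, while kmemleak
	 * tracks the rounded-up slab size, so checksumming the tracked
	 * size reads bytes past the 10 that were requested, which kasan
	 * reports unless it is disabled around the access.
	 */
	static u32 checksum_like_kmemleak(void)
	{
		char *p = kmalloc(10, GFP_KERNEL);	/* bytes [0, 10) unpoisoned */
		size_t tracked = 16;			/* rounded-up object size   */
		u32 sum = 0;

		if (!p)
			return 0;

		kasan_disable_local();			/* reads go past byte 9 */
		sum = crc32(0, p, tracked);
		kasan_enable_local();

		kfree(p);
		return sum;
	}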

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v6 10/11] kmemleak: disable kasan instrumentation for kmemleak
@ 2014-11-05 14:54     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:54 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to temporarily disable kasan
around those accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH] lib: add kasan test module
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-05 14:54     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:54 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like
out of bounds accesses and use after free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more tests here in the future (like out of bounds
accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 1fa4fe8..8548646 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84000ec..b387570 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..896dee5
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH] lib: add kasan test module
@ 2014-11-05 14:54     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-05 14:54 UTC (permalink / raw)
  To: akpm
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like
out of bounds accesses and use after free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more tests here in the future (like out of bounds
accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 1fa4fe8..8548646 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 84000ec..b387570 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..896dee5
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-11  7:21     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-11  7:21 UTC (permalink / raw)
  To: akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

Hi Andrew,

Now we have a stable GCC (4.9.2) which supports kasan, and from my point of view the patchset is ready for merging.
I could have sent v7 (it's just a rebased v6), but I see no point in doing that and bothering people,
unless you are ready to take it.
So how should I proceed?

Thanks,
Andrey.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-11  7:21     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-11  7:21 UTC (permalink / raw)
  To: akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

Hi Andrew,

Now we have a stable GCC (4.9.2) which supports kasan, and from my point of view the patchset is ready for merging.
I could have sent v7 (it's just a rebased v6), but I see no point in doing that and bothering people,
unless you are ready to take it.
So how should I proceed?

Thanks,
Andrey.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-11  7:21     ` Andrey Ryabinin
@ 2014-11-18 17:08       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-18 17:08 UTC (permalink / raw)
  To: akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On 11/11/2014 10:21 AM, Andrey Ryabinin wrote:
> Hi Andrew,
> 
> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
> unless you are ready to take it.
> So how should I proceed?
> 
Ping, Andrew ?

FWIW v7 will have one more patch needed for catching bad accesses in memcpy/memmove/memset.
Recently instrumentation of those functions was removed from GCC 5.0.



> Thanks,
> Andrey.
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 17:08       ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-18 17:08 UTC (permalink / raw)
  To: akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On 11/11/2014 10:21 AM, Andrey Ryabinin wrote:
> Hi Andrew,
> 
> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
> unless you are ready to take it.
> So how should I proceed?
> 
Ping, Andrew ?

FWIW v7 will have one more patch needed for catching bad accesses in memcpy/memmove/memset.
Recently instrumentation of those functions was removed from GCC 5.0.



> Thanks,
> Andrey.
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-11  7:21     ` Andrey Ryabinin
@ 2014-11-18 20:58       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2014-11-18 20:58 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On Tue, 11 Nov 2014 10:21:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> Hi Andrew,
> 
> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
> unless you are ready to take it.

It's a huge pile of tricky code we'll need to maintain.  To justify its
inclusion I think we need to be confident that kasan will find a
significant number of significant bugs that
kmemcheck/debug_pagealloc/slub_debug failed to detect.

How do we get that confidence?  I've seen a small number of
minorish-looking kasan-detected bug reports go past, maybe six or so. 
That's in a 20-year-old code base, so one new minor bug discovered per
three years?  Not worth it!

Presumably more bugs will be exposed as more people use kasan on
different kernel configs, but will their number and seriousness justify
the maintenance effort?

If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
then that tips the balance a little.  What's the feasibility of that?


Sorry to play the hardass here, but someone has to ;)

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 20:58       ` Andrew Morton
  0 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2014-11-18 20:58 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Michal Marek, Thomas Gleixner, Ingo Molnar,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On Tue, 11 Nov 2014 10:21:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> Hi Andrew,
> 
> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
> unless you are ready to take it.

It's a huge pile of tricky code we'll need to maintain.  To justify its
inclusion I think we need to be confident that kasan will find a
significant number of significant bugs that
kmemcheck/debug_pagealloc/slub_debug failed to detect.

How do we get that confidence?  I've seen a small number of
minorish-looking kasan-detected bug reports go past, maybe six or so. 
That's in a 20-year-old code base, so one new minor bug discovered per
three years?  Not worth it!

Presumably more bugs will be exposed as more people use kasan on
different kernel configs, but will their number and seriousness justify
the maintenance effort?

If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
then that tips the balance a little.  What's the feasibility of that?


Sorry to play the hardass here, but someone has to ;)


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 20:58       ` Andrew Morton
@ 2014-11-18 21:09         ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-18 21:09 UTC (permalink / raw)
  To: Andrew Morton, Andrey Ryabinin
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Joe Perches, linux-kernel

On 11/18/2014 03:58 PM, Andrew Morton wrote:
> On Tue, 11 Nov 2014 10:21:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> Hi Andrew,
>>
>> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
>> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
>> unless you are ready to take it.
> 
> It's a huge pile of tricky code we'll need to maintain.  To justify its
> inclusion I think we need to be confident that kasan will find a
> significant number of significant bugs that
> kmemcheck/debug_pagealloc/slub_debug failed to detect.
> 
> How do we get that confidence?  I've seen a small number of
> minorish-looking kasan-detected bug reports go past, maybe six or so. 
> That's in a 20-year-old code base, so one new minor bug discovered per
> three years?  Not worth it!

It's worth noting here that not all bugs discovered by kasan belong to
the -mm tree. Bugs which are more severe, such as:

	http://openwall.com/lists/oss-security/2014/07/17/1

are the result of fuzzing with kasan. So while it's indeed not a huge number,
it's way more than 6 and not only minor issues.


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 21:09         ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-18 21:09 UTC (permalink / raw)
  To: Andrew Morton, Andrey Ryabinin
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Joe Perches, linux-kernel

On 11/18/2014 03:58 PM, Andrew Morton wrote:
> On Tue, 11 Nov 2014 10:21:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> Hi Andrew,
>>
>> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
>> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
>> unless you are ready to take it.
> 
> It's a huge pile of tricky code we'll need to maintain.  To justify its
> inclusion I think we need to be confident that kasan will find a
> significant number of significant bugs that
> kmemcheck/debug_pagealloc/slub_debug failed to detect.
> 
> How do we get that confidence?  I've seen a small number of
> minorish-looking kasan-detected bug reports go past, maybe six or so. 
> That's in a 20-year-old code base, so one new minor bug discovered per
> three years?  Not worth it!

It's worth noting here that not all bugs discovered by kasan belong to
the -mm tree. Bugs which are more severe, such as:

	http://openwall.com/lists/oss-security/2014/07/17/1

are the result of fuzzing with kasan. So while it's indeed not a huge number,
it's way more than 6 and not only minor issues.


Thanks,
Sasha


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 20:58       ` Andrew Morton
@ 2014-11-18 21:15         ` Andi Kleen
  -1 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-11-18 21:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, linux-kernel

> It's a huge pile of tricky code we'll need to maintain.  To justify its
> inclusion I think we need to be confident that kasan will find a
> significant number of significant bugs that
> kmemcheck/debug_pagealloc/slub_debug failed to detect.

I would put it differently. kmemcheck is effectively too slow to run
regularly. kasan is much faster and covers most of kmemcheck.

So I would rather see it as a more practical replacement to
kmemcheck, not an addition.

> How do we get that confidence?  I've seen a small number of
> minorish-looking kasan-detected bug reports go past, maybe six or so. 
> That's in a 20-year-old code base, so one new minor bug discovered per
> three years?  Not worth it!
> 
> Presumably more bugs will be exposed as more people use kasan on
> different kernel configs, but will their number and seriousness justify
> the maintenance effort?

I would expect so. It's also about saving developer time.

IMHO getting better tools like this is the only way to keep
up with growing complexity.

> If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
> then that tips the balance a little.  What's the feasibility of that?

Maybe removing kmemcheck. slub_debug/debug_pagealloc are simple, and are in
different niches (lower overhead debugging)

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 21:15         ` Andi Kleen
  0 siblings, 0 replies; 862+ messages in thread
From: Andi Kleen @ 2014-11-18 21:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, linux-kernel

> It's a huge pile of tricky code we'll need to maintain.  To justify its
> inclusion I think we need to be confident that kasan will find a
> significant number of significant bugs that
> kmemcheck/debug_pagealloc/slub_debug failed to detect.

I would put it differently. kmemcheck is effectively too slow to run
regularly. kasan is much faster and covers most of kmemcheck.

So I would rather see it as a more practical replacement to
kmemcheck, not an addition.

> How do we get that confidence?  I've seen a small number of
> minorish-looking kasan-detected bug reports go past, maybe six or so. 
> That's in a 20-year-old code base, so one new minor bug discovered per
> three years?  Not worth it!
> 
> Presumably more bugs will be exposed as more people use kasan on
> different kernel configs, but will their number and seriousness justify
> the maintenance effort?

I would expect so. It's also about saving developer time.

IMHO getting better tools like this is the only way to keep
up with growing complexity.

> If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
> then that tips the balance a little.  What's the feasibility of that?

Maybe removing kmemcheck. slub_debug/debug_pagealloc are simple, and are in
different niches (lower overhead debugging)

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 21:15         ` Andi Kleen
@ 2014-11-18 21:32           ` Dave Hansen
  -1 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-11-18 21:32 UTC (permalink / raw)
  To: Andi Kleen, Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On 11/18/2014 01:15 PM, Andi Kleen wrote:
>> > If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
>> > then that tips the balance a little.  What's the feasibility of that?
> Maybe removing kmemcheck. slub_debug/debug_pagealloc are simple, and are in
> different niches (lower overhead debugging)

Yeah, slub_debug can be turned on at runtime in production kernels so
it's in a completely different category.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 21:32           ` Dave Hansen
  0 siblings, 0 replies; 862+ messages in thread
From: Dave Hansen @ 2014-11-18 21:32 UTC (permalink / raw)
  To: Andi Kleen, Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Joe Perches, linux-kernel

On 11/18/2014 01:15 PM, Andi Kleen wrote:
>> > If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
>> > then that tips the balance a little.  What's the feasibility of that?
> Maybe removing kmemcheck. slub_debug/debug_pagealloc are simple, and are in
> different niches (lower overhead debugging)

Yeah, slub_debug can be turned on at runtime in production kernels so
it's in a completely different category.


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-05 14:53   ` Andrey Ryabinin
@ 2014-11-18 23:38     ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-18 23:38 UTC (permalink / raw)
  To: Andrey Ryabinin, akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Joe Perches, linux-kernel

Hi Andrey,

After the recent exchange of mails about kasan it came to me that I haven't
seen a kasan warning for a while now. To give kasan a quick test I added a rather
simple error which should generate a kasan warning about accessing userspace
memory (yes, I know kasan has a test module but my setup doesn't like modules):

	diff --git a/net/socket.c b/net/socket.c
	index fe20c31..794e9f4 100644
	--- a/net/socket.c
	+++ b/net/socket.c
	@@ -1902,7 +1902,7 @@ SYSCALL_DEFINE5(setsockopt, int, fd, int, level, int, optname,
	 {
	        int err, fput_needed;
	        struct socket *sock;
	-
	+       *((char *)10) = 5;
	        if (optlen < 0)
	                return -EINVAL;

A GPF was triggered, but no kasan warning was shown.

I remembered that one of the biggest changes in kasan was the introduction of
inline instrumentation, so I went ahead and disabled it to see if that helps. But
the only result of that was the boot process hanging pretty early:

[...]
[    0.000000] IOAPIC[0]: apic_id 21, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Processors: 20
[    0.000000] smpboot: Allowing 24 CPUs, 4 hotplug CPUs
[    0.000000] e820: [mem 0xd0000000-0xffffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1
[    0.000000] PERCPU: Embedded 491 pages/cpu @ffff8808dce00000 s1971864 r8192 d31080 u2097152
*HANG*

I'm using the latest gcc:

$ gcc --version
gcc (GCC) 5.0.0 20141117 (experimental)


I'll continue looking into it tomorrow, just hoping it rings a bell...


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-18 23:38     ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-18 23:38 UTC (permalink / raw)
  To: Andrey Ryabinin, akpm
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Joe Perches, linux-kernel

Hi Andrey,

After the recent exchange of mails about kasan it came to me that I haven't
seen a kasan warning for a while now. To give kasan a quick test I added a rather
simple error which should generate a kasan warning about accessing userspace
memory (yes, I know kasan has a test module but my setup doesn't like modules):

	diff --git a/net/socket.c b/net/socket.c
	index fe20c31..794e9f4 100644
	--- a/net/socket.c
	+++ b/net/socket.c
	@@ -1902,7 +1902,7 @@ SYSCALL_DEFINE5(setsockopt, int, fd, int, level, int, optname,
	 {
	        int err, fput_needed;
	        struct socket *sock;
	-
	+       *((char *)10) = 5;
	        if (optlen < 0)
	                return -EINVAL;

A GPF was triggered, but no kasan warning was shown.

I remembered that one of the biggest changes in kasan was the introduction of
inline instrumentation, so I went ahead and disabled it to see if that helps. But
the only result of that was the boot process hanging pretty early:

[...]
[    0.000000] IOAPIC[0]: apic_id 21, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Processors: 20
[    0.000000] smpboot: Allowing 24 CPUs, 4 hotplug CPUs
[    0.000000] e820: [mem 0xd0000000-0xffffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1
[    0.000000] PERCPU: Embedded 491 pages/cpu @ffff8808dce00000 s1971864 r8192 d31080 u2097152
*HANG*

I'm using the latest gcc:

$ gcc --version
gcc (GCC) 5.0.0 20141117 (experimental)


I'll continue looking into it tomorrow, just hoping it rings a bell...


Thanks,
Sasha


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 20:58       ` Andrew Morton
@ 2014-11-18 23:53         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-18 23:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML

2014-11-18 23:58 GMT+03:00 Andrew Morton <akpm@linux-foundation.org>:
> On Tue, 11 Nov 2014 10:21:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>
>> Hi Andrew,
>>
>> Now we have stable GCC(4.9.2) which supports kasan and from my point of view patchset is ready for merging.
>> I could have sent v7 (it's just rebased v6), but I see no point in doing that and bothering people,
>> unless you are ready to take it.
>
> It's a huge pile of tricky code we'll need to maintain.  To justify its
> inclusion I think we need to be confident that kasan will find a
> significant number of significant bugs that
> kmemcheck/debug_pagealloc/slub_debug failed to detect.
>
> How do we get that confidence?  I've seen a small number of
> minorish-looking kasan-detected bug reports go past, maybe six or so.

I must admit that most of the bugs I've seen are minor,
but there are a bit more than six of them.

I've counted 16:

aab515d (fib_trie: remove potential out of bound access)
984f173 ([SCSI] sd: Fix potential out-of-bounds access)
5e9ae2e (aio: fix use-after-free in aio_migratepage)
2811eba (ipv6: udp packets following an UFO enqueued packet need also
be handled by UFO)
057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
9709674 (ipv4: fix a race in ip4_datagram_release_cb())
4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
624483f (mm: rmap: fix use-after-free in __put_anon_vma)
93b7aca (lib/idr.c: fix out-of-bounds pointer dereference)
b4903d6 (mm: debugfs: move rounddown_pow_of_two() out from do_fault path)
40eea80 (net: sendmsg: fix NULL pointer dereference)
10ec947 (ipv4: fix buffer overflow in ip_options_compile())
dbf20cb2 (f2fs: avoid use invalid mapping of node_inode when evict meta inode)
d6d86c0 (mm/balloon_compaction: redesign ballooned pages management)

+ 2 recently found, seems minor:
    http://lkml.kernel.org/r/1415372020-1871-1-git-send-email-a.ryabinin@samsung.com
    (sched/numa: Fix out of bounds read in sched_init_numa())

    http://lkml.kernel.org/r/1415458085-12485-1-git-send-email-ryabinin.a.a@gmail.com
    (security: smack: fix out-of-bounds access in smk_parse_smack())

Note that some functionality is not yet implemented in this patch set.
Kasan has the ability to detect out-of-bounds accesses to global/stack
variables; neither kmemcheck, debug_pagealloc nor slub_debug can do that.
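
For illustration, a sketch (not code from the patch set) of the kind of
stack bug that only compiler instrumentation can catch:

	/* An off-by-one write on a stack array: slab/page debugging never
	 * sees it, but kasan's stack instrumentation would put a redzone
	 * after 'buf' and report the write.
	 */
	static noinline char stack_oob(void)
	{
		char buf[8] = { 0 };
		volatile int i = 8;	/* hide the index from the compiler */

		buf[i] = 'x';		/* one byte past the end of 'buf' */
		return buf[0];
	}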

> That's in a 20-year-old code base, so one new minor bug discovered per
> three years?  Not worth it!
>
> Presumably more bugs will be exposed as more people use kasan on
> different kernel configs, but will their number and seriousness justify
> the maintenance effort?
>

Yes, AFAIK there are only a few users of kasan now, and I guess that
only a small part of the kernel code has been covered by it.
IMO kasan shouldn't take a lot of maintenance effort: most of the code
is isolated and it doesn't have complex dependencies on in-kernel APIs.
And you could always just poke me, I'd be happy to sort out any issues.

> If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
> then that tips the balance a little.  What's the feasibility of that?
>

I think kasan could replace kmemcheck at some point.
Unlike kmemcheck, kasan cannot detect uninitialized memory reads yet,
but that could be done using the same compiler instrumentation (I have
a proof of concept). It would be a separate Kconfig option, though, so
you would either enable CONFIG_KASAN to detect out-of-bounds accesses
and use-after-frees, or CONFIG_DETECT_UNINITIALIZED_MEMORY to catch
only uninitialized memory reads.

Removing debug_pagealloc is probably not such a good idea, because
unlike kasan it doesn't eat much memory.

slub_debug can be enabled in production kernels without rebuilding,
so I wouldn't touch it either.

>
> Sorry to play the hardass here, but someone has to ;)
>


-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 23:38     ` Sasha Levin
@ 2014-11-19  0:09       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-19  0:09 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML

2014-11-19 2:38 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
> Hi Andrey,
>
> After the recent exchange of mails about kasan it came to me that I haven't
> seen a kasan warning for a while now. To give kasan a quick test I added a rather
> simple error which should generate a kasan warning about accessing userspace
> memory (yes, I know kasan has a test module but my setup doesn't like modules):
>
>         diff --git a/net/socket.c b/net/socket.c
>         index fe20c31..794e9f4 100644
>         --- a/net/socket.c
>         +++ b/net/socket.c
>         @@ -1902,7 +1902,7 @@ SYSCALL_DEFINE5(setsockopt, int, fd, int, level, int, optname,
>          {
>                 int err, fput_needed;
>                 struct socket *sock;
>         -
>         +       *((char *)10) = 5;
>                 if (optlen < 0)
>                         return -EINVAL;
>
> A GPF was triggered, but no kasan warning was shown.
>

Yes, with CONFIG_KASAN_INLINE you will get a GPF instead of a kasan report.
For userspace addresses we don't have shadow memory. In the outline case
I check the address itself before checking the shadow; in the inline case
the compiler just checks the shadow, so there is no way to avoid the GPF.

To be able to print a report instead of a GPF, I need to treat GPFs in a
special way when inline instrumentation is enabled, but that's not done yet.
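
For reference, a minimal sketch of what the outline check amounts to (names
and exact shape are assumptions for illustration, not necessarily the code
in mm/kasan):

    /*
     * Called from __asan_load*()/__asan_store*() in the outline case.
     * Userspace addresses have no shadow, so bail out before touching
     * shadow memory; inline instrumentation has no such early exit,
     * which is why a bad userspace access ends in a GPF instead of a
     * kasan report.
     */
    static __always_inline void check_memory_region(unsigned long addr,
                                                    size_t size, bool write)
    {
            if (addr < PAGE_OFFSET)
                    return;                         /* not a kernel address */

            if (memory_is_poisoned(addr, size))     /* consults shadow bytes */
                    kasan_report(addr, size, write);
    }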

> I remembered that one of the biggest changes in kasan was the introduction of
> inline instrumentation, so I went ahead to disable it and see if it helps. But
> the only result of that was having the boot process hang pretty early:
>
> [...]
> [    0.000000] IOAPIC[0]: apic_id 21, version 17, address 0xfec00000, GSI 0-23
> [    0.000000] Processors: 20
> [    0.000000] smpboot: Allowing 24 CPUs, 4 hotplug CPUs
> [    0.000000] e820: [mem 0xd0000000-0xffffffff] available for PCI devices
> [    0.000000] Booting paravirtualized kernel on KVM
> [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 491 pages/cpu @ffff8808dce00000 s1971864 r8192 d31080 u2097152
> *HANG*
>

Does this hang happen only with your error patch above, or even without it?
In any case, I'll take a look tomorrow.

> I'm using the latest gcc:
>
> $ gcc --version
> gcc (GCC) 5.0.0 20141117 (experimental)
>
>
> I'll continue looking into it tomorrow, just hoping it rings a bell...
>
>
> Thanks,
> Sasha
>



-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-19  0:09       ` Andrey Ryabinin
@ 2014-11-19  0:44         ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-19  0:44 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML

On 11/18/2014 07:09 PM, Andrey Ryabinin wrote:
> Yes with CONFIG_KASAN_INLINE you will get GPF instead of kasan report.
> For userspaces addresses we don't have shadow memory. In outline case
> I just check address itself before checking shadow. In inline case compiler
> just checks shadow, so there is no way to avoid GPF.
> 
> To be able to print report instead of GPF, I need to treat GPFs in a special
> way if inline instrumentation was enabled, but it's not done yet.

I went ahead and tested it with the test module, which worked perfectly. No
more complaints here...

>> > I remembered that one of the biggest changes in kasan was the introduction of
>> > inline instrumentation, so I went ahead to disable it and see if it helps. But
>> > the only result of that was having the boot process hang pretty early:
>> >
>> > [...]
>> > [    0.000000] IOAPIC[0]: apic_id 21, version 17, address 0xfec00000, GSI 0-23
>> > [    0.000000] Processors: 20
>> > [    0.000000] smpboot: Allowing 24 CPUs, 4 hotplug CPUs
>> > [    0.000000] e820: [mem 0xd0000000-0xffffffff] available for PCI devices
>> > [    0.000000] Booting paravirtualized kernel on KVM
>> > [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1
>> > [    0.000000] PERCPU: Embedded 491 pages/cpu @ffff8808dce00000 s1971864 r8192 d31080 u2097152
>> > *HANG*
>> >
> This hang happens only with your error patch above or even without it?

It happens even without the patch.


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-19  0:44         ` Sasha Levin
@ 2014-11-19 12:41           ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-19 12:41 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML

On 11/19/2014 03:44 AM, Sasha Levin wrote:
> On 11/18/2014 07:09 PM, Andrey Ryabinin wrote:
>> Yes with CONFIG_KASAN_INLINE you will get GPF instead of kasan report.
>> For userspaces addresses we don't have shadow memory. In outline case
>> I just check address itself before checking shadow. In inline case compiler
>> just checks shadow, so there is no way to avoid GPF.
>>
>> To be able to print report instead of GPF, I need to treat GPFs in a special
>> way if inline instrumentation was enabled, but it's not done yet.
> 
> I went ahead and tested it with the test module, which worked perfectly. No
> more complaints here...
> 
>>>> I remembered that one of the biggest changes in kasan was the introduction of
>>>> inline instrumentation, so I went ahead to disable it and see if it helps. But
>>>> the only result of that was having the boot process hang pretty early:
>>>>
>>>> [...]
>>>> [    0.000000] IOAPIC[0]: apic_id 21, version 17, address 0xfec00000, GSI 0-23
>>>> [    0.000000] Processors: 20
>>>> [    0.000000] smpboot: Allowing 24 CPUs, 4 hotplug CPUs
>>>> [    0.000000] e820: [mem 0xd0000000-0xffffffff] available for PCI devices
>>>> [    0.000000] Booting paravirtualized kernel on KVM
>>>> [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1
>>>> [    0.000000] PERCPU: Embedded 491 pages/cpu @ffff8808dce00000 s1971864 r8192 d31080 u2097152
>>>> *HANG*
>>>>
>> This hang happens only with your error patch above or even without it?
> 
> It happens even without the patch.
> 

I took your config from the "Replace _PAGE_NUMA with PAGE_NONE protections" thread
and noticed that you have both KASAN and UBSAN enabled.
I haven't tried them together, though it could work with the patch below
(a sketch of the recursion it avoids follows the patch).
Without this patch, however, it should hang much earlier than where you see it.

------------------------------------------------------
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Subject: [PATCH] kasan: don't use ubsan's instrumentation for kasan internals

kasan does unaligned accesses to check shadow memory faster.
If ubsan is also enabled, this leads to unbounded recursion:
__asan_load* -> __ubsan_handle_type_mismatch -> __asan_load* -> ...

Disable ubsan's instrumentation for the kasan internals to avoid that.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kasan/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index ef2d313..2b53073 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -1,4 +1,5 @@
 KASAN_SANITIZE := n
+UBSAN_SANITIZE := n

 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
-- 
2.1.3
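
For illustration, this is roughly the kind of shadow access the changelog
above refers to (a sketch under assumptions, not the actual mm/kasan code;
the helper name and the exact shadow read width are made up here):

    /*
     * Sketch: a fast-path check reads several shadow bytes at once
     * through a pointer that may be unaligned.  UBSan instruments that
     * dereference with __ubsan_handle_type_mismatch(), and the handler
     * is itself instrumented by kasan, so it calls back into
     * __asan_load*() and the two recurse.
     */
    static __always_inline bool shadow_is_nonzero(unsigned long addr)
    {
            u16 *shadow = (u16 *)kasan_mem_to_shadow(addr); /* possibly unaligned */

            return *shadow != 0;
    }

With UBSAN_SANITIZE := n for mm/kasan/, these internal shadow reads are no
longer instrumented by ubsan, so the recursion cannot start.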


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-18 23:53         ` Andrey Ryabinin
@ 2014-11-20  9:03           ` Ingo Molnar
  -1 siblings, 0 replies; 862+ messages in thread
From: Ingo Molnar @ 2014-11-20  9:03 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Andrey Ryabinin, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds


* Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:

> I've counted 16:
> 
> aab515d (fib_trie: remove potential out of bound access)
> 984f173 ([SCSI] sd: Fix potential out-of-bounds access)
> 5e9ae2e (aio: fix use-after-free in aio_migratepage)
> 2811eba (ipv6: udp packets following an UFO enqueued packet need also
> be handled by UFO)
> 057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
> 9709674 (ipv4: fix a race in ip4_datagram_release_cb())
> 4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
> 624483f (mm: rmap: fix use-after-free in __put_anon_vma)
> 93b7aca (lib/idr.c: fix out-of-bounds pointer dereference)
> b4903d6 (mm: debugfs: move rounddown_pow_of_two() out from do_fault path)
> 40eea80 (net: sendmsg: fix NULL pointer dereference)
> 10ec947 (ipv4: fix buffer overflow in ip_options_compile())
> dbf20cb2 (f2fs: avoid use invalid mapping of node_inode when evict meta inode)
> d6d86c0 (mm/balloon_compaction: redesign ballooned pages management)
> 
> + 2 recently found, seems minor:
>     http://lkml.kernel.org/r/1415372020-1871-1-git-send-email-a.ryabinin@samsung.com
>     (sched/numa: Fix out of bounds read in sched_init_numa())
> 
>     http://lkml.kernel.org/r/1415458085-12485-1-git-send-email-ryabinin.a.a@gmail.com
>     (security: smack: fix out-of-bounds access in smk_parse_smack())
> 
> Note that some functionality is not yet implemented in this 
> patch set. Kasan has possibility to detect out-of-bounds 
> accesses on global/stack variables. Neither 
> kmemcheck/debug_pagealloc or slub_debug could do that.
> 
> > That's in a 20-year-old code base, so one new minor bug discovered per
> > three years?  Not worth it!
> >
> > Presumably more bugs will be exposed as more people use kasan on
> > different kernel configs, but will their number and seriousness justify
> > the maintenance effort?
> >
> 
> Yes, AFAIK there are only few users of kasan now, and I guess that
> only small part of kernel code
> was covered by it.
> IMO kasan shouldn't take a lot maintenance efforts, most part of code
> is isolated and it doesn't
> have some complex dependencies on in-kernel API.
> And you could always just poke me, I'd be happy to sort out any issues.
> 
> > If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
> > then that tips the balance a little.  What's the feasibility of that?
> >
> 
> I think kasan could replace kmemcheck at some point.

So that angle sounds interesting, because kmemcheck is essentially
unmaintained right now: in the last 3 years since v3.0,
arch/x86/mm/kmemcheck/ has not seen a single kmemcheck-specific change,
only 4 incidental changes.

kmemcheck is also very architecture-bound and somewhat fragile due to
having to decode instructions, so if generic, compiler-driven
instrumentation can replace it, that would be a plus.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20  9:03           ` Ingo Molnar
@ 2014-11-20 12:35             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-20 12:35 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On 11/20/2014 12:03 PM, Ingo Molnar wrote:
> 
> * Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> 
>> I've counted 16:
>>
>> aab515d (fib_trie: remove potential out of bound access)
>> 984f173 ([SCSI] sd: Fix potential out-of-bounds access)
>> 5e9ae2e (aio: fix use-after-free in aio_migratepage)
>> 2811eba (ipv6: udp packets following an UFO enqueued packet need also
>> be handled by UFO)
>> 057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
>> 9709674 (ipv4: fix a race in ip4_datagram_release_cb())
>> 4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
>> 624483f (mm: rmap: fix use-after-free in __put_anon_vma)
>> 93b7aca (lib/idr.c: fix out-of-bounds pointer dereference)
>> b4903d6 (mm: debugfs: move rounddown_pow_of_two() out from do_fault path)
>> 40eea80 (net: sendmsg: fix NULL pointer dereference)
>> 10ec947 (ipv4: fix buffer overflow in ip_options_compile())
>> dbf20cb2 (f2fs: avoid use invalid mapping of node_inode when evict meta inode)
>> d6d86c0 (mm/balloon_compaction: redesign ballooned pages management)
>>
>> + 2 recently found, seems minor:
>>     http://lkml.kernel.org/r/1415372020-1871-1-git-send-email-a.ryabinin@samsung.com
>>     (sched/numa: Fix out of bounds read in sched_init_numa())
>>
>>     http://lkml.kernel.org/r/1415458085-12485-1-git-send-email-ryabinin.a.a@gmail.com
>>     (security: smack: fix out-of-bounds access in smk_parse_smack())
>>
>> Note that some functionality is not yet implemented in this 
>> patch set. Kasan has possibility to detect out-of-bounds 
>> accesses on global/stack variables. Neither 
>> kmemcheck/debug_pagealloc or slub_debug could do that.
>>
>>> That's in a 20-year-old code base, so one new minor bug discovered per
>>> three years?  Not worth it!
>>>
>>> Presumably more bugs will be exposed as more people use kasan on
>>> different kernel configs, but will their number and seriousness justify
>>> the maintenance effort?
>>>
>>
>> Yes, AFAIK there are only few users of kasan now, and I guess that
>> only small part of kernel code
>> was covered by it.
>> IMO kasan shouldn't take a lot maintenance efforts, most part of code
>> is isolated and it doesn't
>> have some complex dependencies on in-kernel API.
>> And you could always just poke me, I'd be happy to sort out any issues.
>>
>>> If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
>>> then that tips the balance a little.  What's the feasibility of that?
>>>
>>
>> I think kasan could replace kmemcheck at some point.
> 
> So that angle sounds interesting, because kmemcheck is 
> essentially unmaintained right now: in the last 3 years since 
> v3.0 arch/x86/mm/kmemcheck/ has not seen a single kmemcheck 
> specific change, only 4 incidental changes.
> 
> kmemcheck is also very architecture bound and somewhat fragile 
> due to having to decode instructions, so if generic, compiler 
> driven instrumentation can replace it, that would be a plus.
> 

GCC already supports address sanitizer on x86_32/x86_64/arm/arm64/rs6000,
and adding compiler support for any other architecture is trivial.

The per-arch work on the kernel side may not be trivial, but there is
nothing complex about it either. It's much simpler than kmemcheck.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20  9:03           ` Ingo Molnar
@ 2014-11-20 16:32             ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-11-20 16:32 UTC (permalink / raw)
  To: Ingo Molnar, Andrew Morton
  Cc: Andrey Ryabinin, Andrey Ryabinin, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On Thu, Nov 20, 2014 at 12:03 PM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
>> I've counted 16:
>>
>> aab515d (fib_trie: remove potential out of bound access)
>> 984f173 ([SCSI] sd: Fix potential out-of-bounds access)
>> 5e9ae2e (aio: fix use-after-free in aio_migratepage)
>> 2811eba (ipv6: udp packets following an UFO enqueued packet need also
>> be handled by UFO)
>> 057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
>> 9709674 (ipv4: fix a race in ip4_datagram_release_cb())
>> 4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
>> 624483f (mm: rmap: fix use-after-free in __put_anon_vma)
>> 93b7aca (lib/idr.c: fix out-of-bounds pointer dereference)
>> b4903d6 (mm: debugfs: move rounddown_pow_of_two() out from do_fault path)
>> 40eea80 (net: sendmsg: fix NULL pointer dereference)
>> 10ec947 (ipv4: fix buffer overflow in ip_options_compile())
>> dbf20cb2 (f2fs: avoid use invalid mapping of node_inode when evict meta inode)
>> d6d86c0 (mm/balloon_compaction: redesign ballooned pages management)
>>
>> + 2 recently found, seems minor:
>>     http://lkml.kernel.org/r/1415372020-1871-1-git-send-email-a.ryabinin@samsung.com
>>     (sched/numa: Fix out of bounds read in sched_init_numa())
>>
>>     http://lkml.kernel.org/r/1415458085-12485-1-git-send-email-ryabinin.a.a@gmail.com
>>     (security: smack: fix out-of-bounds access in smk_parse_smack())
>>
>> Note that some functionality is not yet implemented in this
>> patch set. Kasan has possibility to detect out-of-bounds
>> accesses on global/stack variables. Neither
>> kmemcheck/debug_pagealloc or slub_debug could do that.
>>
>> > That's in a 20-year-old code base, so one new minor bug discovered per
>> > three years?  Not worth it!
>> >
>> > Presumably more bugs will be exposed as more people use kasan on
>> > different kernel configs, but will their number and seriousness justify
>> > the maintenance effort?
>> >
>>
>> Yes, AFAIK there are only few users of kasan now, and I guess that
>> only small part of kernel code
>> was covered by it.
>> IMO kasan shouldn't take a lot maintenance efforts, most part of code
>> is isolated and it doesn't
>> have some complex dependencies on in-kernel API.
>> And you could always just poke me, I'd be happy to sort out any issues.
>>
>> > If kasan will permit us to remove kmemcheck/debug_pagealloc/slub_debug
>> > then that tips the balance a little.  What's the feasibility of that?
>> >
>>
>> I think kasan could replace kmemcheck at some point.
>
> So that angle sounds interesting, because kmemcheck is
> essentially unmaintained right now: in the last 3 years since
> v3.0 arch/x86/mm/kmemcheck/ has not seen a single kmemcheck
> specific change, only 4 incidental changes.
>
> kmemcheck is also very architecture bound and somewhat fragile
> due to having to decode instructions, so if generic, compiler
> driven instrumentation can replace it, that would be a plus.

Hi Andrew, Ingo,

I understand your concerns about added complexity.

Let me provide some background first.
We've developed a set of tools, AddressSanitizer (Asan), ThreadSanitizer
and MemorySanitizer, for user space. We actively use them for testing
inside Google (continuous testing, fuzzing, running production services).
To date the tools have found more than 10,000 scary bugs in Chromium,
Google's internal codebase and various open-source projects (Firefox,
OpenSSL, gcc, clang, ffmpeg, MySQL and lots of others):
https://code.google.com/p/address-sanitizer/wiki/FoundBugs
https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
The tools are part of both the gcc and clang compilers.

We have not yet done massive testing under the Kernel AddressSanitizer
(it's a bit of a chicken-and-egg problem: you need it to be upstream to
start applying it extensively). To date it has found about 50 bugs.
Bugs that we've found in the upstream kernel are listed here:
https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
We've also found ~20 bugs in our internal version of the kernel, and
people from Samsung and Oracle have found some as well. It's somewhat
expected that when we boot the kernel and run a trivial workload, we do
not find hundreds of bugs -- most of the harmful bugs in the kernel
codebase were already fixed the hard way (the kernel is quite stable,
right). Based on our experience with the user-space version of the tool,
most of the bugs will be discovered by continuously testing new code (new
bugs discovered the easy way), running fuzzers (which can discover
existing bugs that are not hit frequently enough) and running end-to-end
tests of production systems.

As others noted, the main feature of AddressSanitizer is its performance,
due to inline compiler instrumentation and a simple linear shadow memory.
User-space Asan has a ~2x slowdown on computational programs and a ~2x
increase in memory consumption. Taking into account that the kernel
usually consumes only a small fraction of CPU and memory when running
real user-space programs, I would expect kernel Asan to have a ~10-30%
slowdown and a similar memory consumption increase (once we finish all
the tuning).
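
To make "inline compiler instrumentation" concrete: instead of emitting a
call into the runtime, the compiler expands the shadow check at every
access. Conceptually it looks something like the following (a sketch of the
emitted shape, not literal GCC output; the report function name is an
assumption):

    /* What an instrumented, aligned 8-byte store to 'p' roughly becomes. */
    s8 shadow = *(s8 *)kasan_mem_to_shadow((unsigned long)p);
    if (unlikely(shadow))                       /* non-zero shadow: invalid access */
            __asan_report_store8((unsigned long)p);  /* assumed report hook */
    *p = val;

The common (valid) path is just a load, a compare and a predicted branch,
which is where the low overhead comes from; it also means the checks cannot
be turned off at runtime once the kernel is built with them.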

I agree that Asan could well replace kmemcheck. We have plans to start
working on a Kernel MemorySanitizer that finds uses of uninitialized
memory; Asan+Msan together will provide feature parity with kmemcheck.
As others noted, Asan is unlikely to replace debug slab and pagealloc,
which can be enabled at runtime. Asan uses compiler instrumentation, so
even if it is disabled, it still incurs visible overheads.

The Asan technology is easily portable to other architectures. The
compiler instrumentation is fully portable; the runtime has some
arch-dependent parts, like shadow mapping and atomic operation
interception, which are relatively easy to port.

Thanks

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20 16:32             ` Dmitry Vyukov
@ 2014-11-20 23:00               ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2014-11-20 23:00 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Ingo Molnar, Andrey Ryabinin, Andrey Ryabinin,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:

> Let me provide some background first.

Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
changelog?

Also, some quantitative info about the kasan overhead would be
useful.

In this discussion you've mentioned a few planned kasan enhancements.
Please also list those and try to describe the amount of effort and the
complexity levels involved.  Partly so others can understand the plans and
partly so we can see what we're semi-committing ourselves to if we merge
this stuff.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20 23:00               ` Andrew Morton
@ 2014-11-20 23:14                 ` Thomas Gleixner
  -1 siblings, 0 replies; 862+ messages in thread
From: Thomas Gleixner @ 2014-11-20 23:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dmitry Vyukov, Ingo Molnar, Andrey Ryabinin, Andrey Ryabinin,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Dave Jones, Jonathan Corbet, Joe Perches, LKML,
	Linus Torvalds

On Thu, 20 Nov 2014, Andrew Morton wrote:

> On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:
> 
> > Let me provide some background first.
> 
> Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
> changelog?

And into Documentation/UBSan or whatever the favourite place is. Lengthy
0/n explanations have a tendency to be hard to retrieve.

Thanks,

	tglx




* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20 23:00               ` Andrew Morton
@ 2014-11-21  7:32                 ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2014-11-21  7:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ingo Molnar, Andrey Ryabinin, Andrey Ryabinin,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On Fri, Nov 21, 2014 at 2:00 AM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:
>
>> Let me provide some background first.
>
> Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
> changelog?
>
> Also, some quantitative info about the kmemleak overhead would be
> useful.
>
> In this discussion you've mentioned a few planned kasan enhancements.
> Please also list those and attempt to describe the amount of effort and
> complexity levels.  Partly so other can understand the plans and partly
> so we can see what we're semi-committing ourselves to if we merge this
> stuff.


The enhancements are:
1. Detection of stack out-of-bounds. This is done mostly in the
compiler. Kernel only needs adjustments in reporting.
2. Detection of global out-of-bounds. Kernel will need to process
compiler-generated list of globals during bootstrap. Complexity is
very low and it is isolated in Asan code.
3. Heap quarantine (delayed reuse of heap blocks). We will need to
hook into slub, queue freed blocks in an efficient/scalable way and
integrate with memory shrinker (register_shrinker). This will be
somewhat complex and touch production kernel code. Konstantin
Khlebnikov wants to make the quarantine available independently of
Asan, as part of slub debug that can be enabled at runtime.
4. Port Asan to slAb.
5. Do various tuning of allocator integration, redzones sizes,
speeding up what is currently considered debug-only paths in
malloc/free, etc.
6. Some people also expressed interest in ARM port.

The user-space Asan codebase is mostly stable for the last two years,
so it's not that we have infinite plans.


* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20 23:00               ` Andrew Morton
@ 2014-11-21 11:06                 ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-21 11:06 UTC (permalink / raw)
  To: Andrew Morton, Dmitry Vyukov
  Cc: Ingo Molnar, Andrey Ryabinin, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On 11/21/2014 02:00 AM, Andrew Morton wrote:
> On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:
> 
>> Let me provide some background first.
> 
> Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
> changelog?
> 

Sure.

> Also, some quantitative info about the kmemleak overhead would be
> useful.
> 

Confused. Perhaps you mean kmemcheck?

I did some brief performance testing:

$ netperf -l 30

		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23


So on this workload KASan is x500-x600 times faster than kmemcheck.


> In this discussion you've mentioned a few planned kasan enhancements. 
> Please also list those and attempt to describe the amount of effort and
> complexity levels.  Partly so other can understand the plans and partly
> so we can see what we're semi-committing ourselves to if we merge this
> stuff.
> 



* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-21  7:32                 ` Dmitry Vyukov
@ 2014-11-21 11:19                   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-21 11:19 UTC (permalink / raw)
  To: Dmitry Vyukov, Andrew Morton
  Cc: Ingo Molnar, Andrey Ryabinin, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, LKML, Linus Torvalds

On 11/21/2014 10:32 AM, Dmitry Vyukov wrote:
> On Fri, Nov 21, 2014 at 2:00 AM, Andrew Morton
> <akpm@linux-foundation.org> wrote:
>> On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:
>>
>>> Let me provide some background first.
>>
>> Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
>> changelog?
>>
>> Also, some quantitative info about the kmemleak overhead would be
>> useful.
>>
>> In this discussion you've mentioned a few planned kasan enhancements.
>> Please also list those and attempt to describe the amount of effort and
>> complexity levels.  Partly so other can understand the plans and partly
>> so we can see what we're semi-committing ourselves to if we merge this
>> stuff.
> 
> 
> The enhancements are:
> 1. Detection of stack out-of-bounds. This is done mostly in the
> compiler. Kernel only needs adjustments in reporting.

Not so easy:
 - Because of redzones, the stack size needs enlarging.
 - We also need to populate shadow for the addresses where the kernel .data section
   is mapped, because we need shadow memory for the init task's stack.


> 2. Detection of global out-of-bounds. Kernel will need to process
> compiler-generated list of globals during bootstrap. Complexity is
> very low and it is isolated in Asan code.

One easy thing to do here is adding support for .init.array.* constructors.
The kernel already supports .init.array constructors, but for address sanitizer
GCC puts its constructors into the .init.array.00099 section.
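
For reference, the kernel's existing constructor handling boils down to a loop like
the one below (a simplified sketch modelled on do_ctors() in init/main.c, not code
from this series); the missing piece is getting GCC's .init.array.00099 entries
linked into the same __ctors_start/__ctors_end range:

	/*
	 * Simplified sketch of how the kernel already runs .init.array
	 * constructors (cf. do_ctors() in init/main.c, CONFIG_CONSTRUCTORS).
	 * Asan's constructors in .init.array.00099 would have to end up
	 * inside the same [__ctors_start, __ctors_end) range.
	 */
	#include <linux/init.h>

	extern char __ctors_start[], __ctors_end[];

	static void __init run_init_array_ctors(void)
	{
		ctor_fn_t *fn;

		for (fn = (ctor_fn_t *)__ctors_start;
		     fn < (ctor_fn_t *)__ctors_end; fn++)
			(*fn)();	/* e.g. registers a unit's asan globals */
	}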

Just as for stack redzones, shadow needs to be populated for kernel .data addresses.
Shadow memory for the module mapping space is also needed.


> 3. Heap quarantine (delayed reuse of heap blocks). We will need to
> hook into slub, queue freed blocks in an efficient/scalable way and
> integrate with memory shrinker (register_shrinker). This will be
> somewhat complex and touch production kernel code. Konstantin
> Khlebnikov wants to make the quarantine available independently of
> Asan, as part of slub debug that can be enabled at runtime.

If someone wants to try quarantine for slub: git://github.com/koct9i/linux/ --branch=quarantine

It has some problems with switching it on/off at runtime; besides that, it works.
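
For anyone unfamiliar with the idea, here is a purely conceptual sketch of such a
quarantine (not the code from that branch; locking and the shrinker integration
mentioned above are omitted, and all names are illustrative):

/*
 * Conceptual sketch only: freed objects are queued instead of being returned
 * to the allocator immediately, and the oldest entries are really freed once
 * a byte budget is exceeded. Locking and shrinker integration are omitted.
 */
#include <linux/slab.h>

struct quarantine_entry {
	struct quarantine_entry *next;
	void *object;
	size_t size;
};

static struct quarantine_entry *q_head, *q_tail;
static size_t q_bytes;
#define QUARANTINE_BUDGET (1UL << 20)	/* start evicting above 1 MiB */

static void quarantine_put(void *object, size_t size)
{
	struct quarantine_entry *e = kmalloc(sizeof(*e), GFP_ATOMIC);

	if (!e) {
		kfree(object);			/* no memory to delay the free */
		return;
	}
	e->next = NULL;
	e->object = object;
	e->size = size;
	if (q_tail)
		q_tail->next = e;
	else
		q_head = e;
	q_tail = e;
	q_bytes += size;

	while (q_bytes > QUARANTINE_BUDGET) {	/* evict oldest (FIFO) entries */
		struct quarantine_entry *old = q_head;

		q_head = old->next;
		if (!q_head)
			q_tail = NULL;
		q_bytes -= old->size;
		kfree(old->object);		/* the real, delayed free */
		kfree(old);
	}
}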

> 4. Port Asan to slAb.
> 5. Do various tuning of allocator integration, redzones sizes,
> speeding up what is currently considered debug-only paths in
> malloc/free, etc.
> 6. Some people also expressed interest in ARM port.
> 

7. The compiler can't instrument assembler code, so it would be nice to have
   checks in the most frequently used parts of inline assembly. Something like
   this:

	static inline void atomic_inc(atomic_t *v)
	{
		kasan_check_memory(v, sizeof(*v), WRITE);
		asm volatile(LOCK_PREFIX "incl %0"
			     : "+m" (v->counter));
	}

8. With asan's inline instrumentation, bugs like NULL-ptr derefs or accesses to user
space turn into general protection faults. I will add a hint message to the GPF
handler to indicate that the GPF could be caused by a NULL-ptr dereference or a user
memory access. It's trivial, so I'll do this in v7.
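
A minimal sketch of how such a hint could be hooked into the existing die notifier
chain (register_die_notifier()/DIE_GPF are existing kernel interfaces; the handler
name and the message below are assumptions, not necessarily what v7 will contain):

	/*
	 * Sketch of a GPF hint via the die notifier chain. The handler name
	 * and message text are illustrative, not the actual v7 code.
	 */
	#include <linux/init.h>
	#include <linux/kdebug.h>
	#include <linux/notifier.h>
	#include <linux/printk.h>

	static int kasan_die_handler(struct notifier_block *self,
				     unsigned long val, void *data)
	{
		if (val == DIE_GPF)
			pr_emerg("kasan: GPF could be caused by a NULL-ptr dereference or a user memory access\n");
		return NOTIFY_OK;
	}

	static struct notifier_block kasan_die_notifier = {
		.notifier_call = kasan_die_handler,
	};

	static int __init kasan_register_die_notifier(void)
	{
		register_die_notifier(&kasan_die_notifier);
		return 0;
	}
	arch_initcall(kasan_register_die_notifier);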




* Re: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
  2014-11-20 23:14                 ` Thomas Gleixner
@ 2014-11-21 16:06                   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-21 16:06 UTC (permalink / raw)
  To: Thomas Gleixner, Andrew Morton
  Cc: Dmitry Vyukov, Ingo Molnar, Andrey Ryabinin,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, x86, linux-mm, Randy Dunlap, Peter Zijlstra,
	Alexander Viro, Dave Jones, Jonathan Corbet, Joe Perches, LKML,
	Linus Torvalds

On 11/21/2014 02:14 AM, Thomas Gleixner wrote:
> On Thu, 20 Nov 2014, Andrew Morton wrote:
> 
>> On Thu, 20 Nov 2014 20:32:30 +0400 Dmitry Vyukov <dvyukov@google.com> wrote:
>>
>>> Let me provide some background first.
>>
>> Well that was useful.  Andrey, please slurp Dmitry's info into the 0/n
>> changelog?
> 
> And into Documentation/UBSan or whatever the favourite place is. 0/n
> lengthy explanations have a tendecy to be hard to retrieve.
> 

I would rather put this into the 1/n patch changelog.
IMO Documentation should only describe how to use this tool and how it works.

And UBSan != KASan. UBSan is for detecting undefined behavior,
KASan is for out-of-bounds accesses and use-after-free bugs.

> Thanks,
> 
> 	tglx
> 
> 
> 



* [PATCH v7 00/12] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-11-24 18:02   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, Linus Torvalds, linux-kernel

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the
kernel to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore
you will need a fresh GCC >= v4.9.2.

Patches are based on, and should apply cleanly on top of, 3.18-rc6 and mmotm-2014-11-19-16-16.
Patches are available in git as well:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v7

Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently, instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them (a rough sketch follows after this list).

   - Added kasan's die notifier, which prints a hint message before a General protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.
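
To make the interceptor change above concrete, here is a rough sketch of the general
shape (check_memory_region(), __memset() and __memcpy() are assumed helper names,
not necessarily the ones used in the patch):

/*
 * Rough sketch of a mem* interceptor: validate the whole range against the
 * shadow, then call the uninstrumented implementation. check_memory_region(),
 * __memset() and __memcpy() are assumed names, not necessarily the patch's.
 */
#include <linux/types.h>

void check_memory_region(unsigned long addr, size_t size, bool write);
void *__memset(void *s, int c, size_t n);
void *__memcpy(void *dst, const void *src, size_t n);

void *memset(void *addr, int c, size_t len)
{
	check_memory_region((unsigned long)addr, len, true);	/* write */
	return __memset(addr, c, len);
}

void *memcpy(void *dst, const void *src, size_t len)
{
	check_memory_region((unsigned long)src, len, false);	/* read  */
	check_memory_region((unsigned long)dst, len, true);	/* write */
	return __memcpy(dst, src, len);
}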

Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work with several CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches
	  a bug right before it happens, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
    (on x86_64 16TB of virtual address space reserved for shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is function to translate address to corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8 bytes are inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking
    the corresponding shadow memory. If the access is not valid, an error is reported.
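
    As an illustration, the check behind such a call for an access of size <= 8 boils
    down to roughly the following (a sketch built on kasan_mem_to_shadow() above, not
    the exact code in this series; kasan_report()'s signature is an assumption):

     /*
      * One shadow byte decides whether an access of size <= 8 is valid.
      * 7 == (1 << KASAN_SHADOW_SCALE_SHIFT) - 1; s8, likely()/unlikely()
      * and _RET_IP_ come from the usual kernel headers.
      */
     static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
     {
             s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

             if (likely(shadow == 0))
                     return false;	/* the whole 8-byte granule is accessible */

             /*
              * shadow == k (1..7): only the first k bytes of the granule are
              * valid; shadow < 0: the whole granule is poisoned (redzone,
              * freed memory, ...).
              */
             return (s8)((addr & 7) + size - 1) >= shadow;
     }

     void __asan_load4(unsigned long addr)
     {
             if (unlikely(memory_is_poisoned(addr, 4)))
                     kasan_report(addr, 4, false, _RET_IP_);	/* false: read */
     }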


Changelog for previous versions:
===============================

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split,
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives
      an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        The GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with a future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. For now, let's wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub specific stuf moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in last patch. This will require two not
         yet-in-trunk-patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (12):
  Add kernel address sanitizer infrastructure.
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions

 Documentation/kasan.txt               | 169 ++++++++++++
 Makefile                              |  23 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/boot/compressed/eboot.c      |   2 +
 arch/x86/boot/compressed/misc.h       |   1 +
 arch/x86/include/asm/kasan.h          |  27 ++
 arch/x86/include/asm/string_64.h      |  18 +-
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 ++
 arch/x86/kernel/setup.c               |   3 +
 arch/x86/kernel/x8664_ksyms_64.c      |  10 +-
 arch/x86/lib/memcpy_64.S              |   2 +
 arch/x86/lib/memmove_64.S             |   4 +
 arch/x86/lib/memset_64.S              |  10 +-
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/kasan_init_64.c           | 107 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   6 +
 include/linux/kasan.h                 |  69 +++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |  10 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 ++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 +++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 506 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  54 ++++
 mm/kasan/report.c                     | 237 ++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 +++-
 scripts/Makefile.lib                  |  10 +
 45 files changed, 1714 insertions(+), 22 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joe Perches <joe@perches.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
-- 
2.1.3


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v7 00/12] Kernel address sanitizer - runtime memory debugger.
@ 2014-11-24 18:02   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, Linus Torvalds, linux-kernel

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN supported only for x86_64 architecture and requires kernel
to be build with SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC >= v4.9.2

Patches are based should apply cleanly on top of 3.18-rc6 and mmotm-2014-11-19-16-16.
Patches  available in git as well:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v7

Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently instrumentation of builtin functions calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before General protection fault,
       explaining that GPF could be caused by NULL-ptr dereference or user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in out internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck couldn't work on several CPUs. It always sets number of CPUs to 1.
	  KASan doesn't have such limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases are not able to detect bad reads,
	  KASan able to detect both reads and writes.

	- In some cases (e.g. redzone overwritten) SLUB_DEBUG detect
	  bugs only on allocation/freeing of object. KASan catch
	  bugs right before it will happen, so we always know exact
	  place of first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
    (on x86_64 16TB of virtual address space reserved for shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is function to translate address to corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and other (8 - k) bytes are not;
    Any negative value indicates that the entire 8-bytes are inaccessible.
    Different negative values used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether memory region is valid to access or not by checking
    corresponding shadow memory. If access is not valid an error printed.


Changelog for previous versions:
===============================

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causing unecessary split in __asan_load1/__asan_store1. Because of this split
      kasan_report() is actually not inlined (even though it __always_inline) and _RET_IP_ gives
      unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but to will be soon. I'm adding this in advance
        in order to avoid breaking kasan with future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC verison requirements in doc (GCC kasan patches were backported into 4.9 branch)

    - Dropped last patch with inline instrumentation support. At first let's wait for merging GCC patches.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added posion page. This page mapped to shadow correspondig to
      shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It used to catch memory access to shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub specific stuf moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in last patch. This will require two not
         yet-in-trunk-patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in shadow memory laoyut.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for kernel.
      16TB on x86_64 to cover all 128TB of kernel's address space.
      At early stage we map whole shadow region with zero page.
      Latter, after physical memory mapped to direct mapping address range
      we unmap zero pages from corresponding shadow and allocate and map a real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was change from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If compiler doesn't support -fsanitize=kernel-address warning printed and build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that this hooks are not needed. Compiler
       already instrument memset/memcpy/memmove (inserts __asan_load/__asan_store call before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (12):
  Add kernel address sanitizer infrastructure.
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions

 Documentation/kasan.txt               | 169 ++++++++++++
 Makefile                              |  23 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/boot/compressed/eboot.c      |   2 +
 arch/x86/boot/compressed/misc.h       |   1 +
 arch/x86/include/asm/kasan.h          |  27 ++
 arch/x86/include/asm/string_64.h      |  18 +-
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 ++
 arch/x86/kernel/setup.c               |   3 +
 arch/x86/kernel/x8664_ksyms_64.c      |  10 +-
 arch/x86/lib/memcpy_64.S              |   2 +
 arch/x86/lib/memmove_64.S             |   4 +
 arch/x86/lib/memset_64.S              |  10 +-
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/kasan_init_64.c           | 107 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   6 +
 include/linux/kasan.h                 |  69 +++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |  10 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 ++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 +++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 506 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  54 ++++
 mm/kasan/report.c                     | 237 ++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 +++-
 scripts/Makefile.lib                  |  10 +
 45 files changed, 1714 insertions(+), 22 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joe Perches <joe@perches.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
-- 
2.1.3


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v7 01/12] Add kernel address sanitizer infrastructure.
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
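
As a quick illustration (the example address is arbitrary and not taken from this patch),
any two addresses that fall into the same 8-byte granule map to the same shadow byte:

     kasan_mem_to_shadow(0xffff880000000100UL);  /* (addr >> 3) + KASAN_SHADOW_OFFSET        */
     kasan_mem_to_shadow(0xffff880000000107UL);  /* same shadow byte: 0x107 >> 3 == 0x100 >> 3 */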

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte word is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
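
As an illustration of this encoding (a hypothetical 13-byte object at an 8-byte aligned
address 'addr'; not part of this patch), the shadow would end up looking like:

     u8 *shadow = (u8 *)kasan_mem_to_shadow(addr);

     shadow[0] = 0;   /* bytes 0..7 of the object are fully accessible         */
     shadow[1] = 5;   /* only the first 5 of bytes 8..15 belong to the object  */
     /* the shadow bytes after that hold a negative value marking the redzone  */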

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether a memory region is valid to access or not by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
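
For example (an illustrative sketch of outline instrumentation with hypothetical
variables 'p' and 'val', not literal compiler output), a 4-byte store through 'p'
is emitted roughly as:

     __asan_store4((unsigned long)p);   /* check the shadow covering these 4 bytes */
     *p = val;                          /* the original memory access              */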

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work with more than one CPU; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone being overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches a
	  bug right before it happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 205 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 927 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the compiler instrumentation types. The former produces a smaller
+binary, while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory.
+This option significantly enlarges the kernel, but it gives an x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 92edae4..052c1f4 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables the address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions check the
+	  shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text
+	  section as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code that checks the shadow
+	  memory before memory accesses. This is faster than outline (in
+	  some workloads it gives about an x2 boost over outline
+	  instrumentation), but makes the kernel's .text much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index d9d5794..33d9971 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -72,3 +72,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..ef2d313
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,7 @@
+KASAN_SANITIZE := n
+
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..f77be01
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,374 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+__attribute__((alias("__asan_load1")))
+void __asan_load1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+__attribute__((alias("__asan_load2")))
+void __asan_load2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+__attribute__((alias("__asan_load4")))
+void __asan_load4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+__attribute__((alias("__asan_load8")))
+void __asan_load8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+__attribute__((alias("__asan_load16")))
+void __asan_load16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+__attribute__((alias("__asan_store1")))
+void __asan_store1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+__attribute__((alias("__asan_store2")))
+void __asan_store2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+__attribute__((alias("__asan_store4")))
+void __asan_store4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+__attribute__((alias("__asan_store8")))
+void __asan_store8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+__attribute__((alias("__asan_store16")))
+void __asan_store16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..56a2089
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,205 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 01/12] Add kernel address sanitizer infrastructure.
@ 2014-11-24 18:02     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte word is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether a memory region is valid to access or not by checking the
corresponding shadow memory. If the access is not valid, an error is printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work with more than one CPU; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone being overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches a
	  bug right before it happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   7 +
 mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 205 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 927 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the compiler instrumentation types. The former produces a smaller
+binary, while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+memory access is valid or not by checking the corresponding shadow memory.
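+
+For example, a one byte store such as
+
+	*ptr = 1;
+
+is roughly transformed by the compiler into (an illustrative sketch, not the
+exact GCC output):
+
+	__asan_store1((unsigned long)ptr);
+	*ptr = 1;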
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives an x1.1-x2 performance boost
+over an outline-instrumented kernel.
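+
+Roughly, for a one byte store (*ptr = 1) the inline-instrumented code looks
+like the following sketch (the real GCC output differs in details):
+
+	s8 shadow = *(s8 *)(((unsigned long)ptr >> 3) + KASAN_SHADOW_OFFSET);
+
+	if (unlikely(shadow && ((s8)((unsigned long)ptr & 7) >= shadow)))
+		__asan_report_store1_noabort((unsigned long)ptr);
+	*ptr = 1;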
diff --git a/Makefile b/Makefile
index 92edae4..052c1f4 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ddd070a..bb26ec3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8 of the
+	  available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE and add
+	  slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform the check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory
+	  before memory accesses. This is faster than outline instrumentation
+	  (in some workloads it gives about an x2 boost), but it makes the
+	  kernel's .text section much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index d9d5794..33d9971 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -72,3 +72,4 @@ obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..ef2d313
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,7 @@
+KASAN_SANITIZE := n
+
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..f77be01
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,374 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
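+/*
+ * Marks the shadow for [address, address + size) as accessible. If size is
+ * not a multiple of KASAN_SHADOW_SCALE_SIZE, the shadow byte covering the
+ * final, partially accessible 8-byte word is set to the number of accessible
+ * bytes in that word (size & KASAN_SHADOW_MASK).
+ */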
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
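+/*
+ * A shadow byte of 0 means the whole 8-byte word containing @addr is
+ * accessible, a value 1..7 means only that many leading bytes are accessible,
+ * and a negative value means the word is fully poisoned. For a non-zero
+ * shadow byte the 1-byte access is bad when the offset of @addr within its
+ * word (addr & KASAN_SHADOW_MASK) is >= the shadow value.
+ */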
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+__attribute__((alias("__asan_load1")))
+void __asan_load1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+__attribute__((alias("__asan_load2")))
+void __asan_load2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+__attribute__((alias("__asan_load4")))
+void __asan_load4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+__attribute__((alias("__asan_load8")))
+void __asan_load8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+__attribute__((alias("__asan_load16")))
+void __asan_load16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+__attribute__((alias("__asan_store1")))
+void __asan_store1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+__attribute__((alias("__asan_store2")))
+void __asan_store2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+__attribute__((alias("__asan_store4")))
+void __asan_store4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+__attribute__((alias("__asan_store8")))
+void __asan_store8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+__attribute__((alias("__asan_store16")))
+void __asan_store16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..56a2089
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,205 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of the code was borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or
+# directories we don't want to check (controlled by the variables
+# KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
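+# For example, "KASAN_SANITIZE := n" in a Makefile disables instrumentation
+# for that whole directory, while "KASAN_SANITIZE_foo.o := n" (foo.o being a
+# placeholder object name) disables it for a single file only.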
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 02/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for kasan.
The compiler inserts an __asan_load call between load_segment and wrmsrl. If
the kernel is built with a stack protector, this results in a boot failure
because __asan_load itself has a stack protector.

To avoid this, irq_stack_union.gs_base is stored in a temporary variable before
load_segment, so __asan_load will be called before load_segment().

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
    which tells the compiler not to instrument this function (a minimal
    sketch of this is shown below). However, this results in a build failure
    with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c
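
For reference, a minimal sketch of alternative (a), shown only for
illustration and not applied here:

	__attribute__((no_sanitize_address))
	void load_percpu_segment(int cpu);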

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8779d63..97f56f6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 03/12] x86_64: add KASan support
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Thomas Gleixner, Ingo Molnar

This patch adds the arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for the shadow memory.
It's located in the range [0xffffd90000000000 - 0xffffe90000000000],
which belongs to the vmalloc area.
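
As a quick sanity check: 0xffffe90000000000 - 0xffffd90000000000 =
0x100000000000 bytes = 16TB of shadow, and at one shadow byte per 8 bytes
of memory this covers the 128TB of kernel address space.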

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_init())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
(__phys_addr). __phys_addr is instrumented, so __asan_load could be
called before the shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  27 ++++++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +++-
 arch/x86/kernel/head_64.S         |  28 ++++++++++
 arch/x86/kernel/setup.c           |   3 ++
 arch/x86/mm/Makefile              |   3 ++
 arch/x86/mm/kasan_init_64.c       | 107 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 15 files changed, 191 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ec21dfd..0ccd17a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -84,6 +84,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 44a866b..0cb8703 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..47e0d42
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d4502c..74d3f3e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 872dab8..9f9e989 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1174,6 +1175,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 6a19ad9..b6c5168 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -8,6 +8,8 @@ CFLAGS_setup_nx.o		:= $(nostackp)
 
 CFLAGS_fault.o := -I$(src)/../include/asm/trace
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+
 obj-$(CONFIG_X86_PAT)		+= pat_rbtree.o
 obj-$(CONFIG_SMP)		+= tlb.o
 
@@ -30,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_MEMTEST)		+= memtest.o
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..70041fd
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,107 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled\n");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..386cc8b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdfffe90000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 04/12] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index a857225..a5c8e84 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -61,6 +62,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index f77be01..b336073 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da1d78..2a6a961 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 56a2089..8ac3b6b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -78,6 +81,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b0e6eab..3829589 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -58,6 +58,7 @@
 #include <linux/page-debug-flags.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
+#include <linux/kasan.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -758,6 +759,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -940,6 +942,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 05/12] mm: slub: introduce virt_to_obj function.
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

virt_to_obj() takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.
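
A hypothetical worked example (numbers made up): for a cache with
s->size == 128, a pointer 300 bytes into the slab page lies inside the
object that starts at offset 256:

     virt_to_obj(s, slab_page, slab_page + 300)
             == slab_page + 300 - (300 % 128)
             == slab_page + 256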

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 06/12] mm: slub: share slab_err and object_err functions
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Joe Perches, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Pekka Enberg, David Rientjes

Remove static from slab_err() and object_err() and add their
declarations to include/linux/slub_def.h so they can be used by the
kernel address sanitizer.
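
For illustration, report code outside mm/slub.c (such as kasan's,
added later in this series) can then describe a bad access in terms of
the affected slub object, e.g.:

     object_err(cache, page, object, "kasan error");
     slab_err(cache, page, "access to slab redzone");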

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 mm/slub.c                | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..144b5cb 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,9 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+__printf(3, 4)
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 95d2142..0c01584 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

Wrap accesses to object metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which can be useful for different checkers (e.g. KASan).
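
The hooks are empty here; a checker fills them in. For illustration,
the kasan patch later in this series does essentially:

     static inline void metadata_access_enable(void)
     {
             kasan_disable_local();
     }

     static inline void metadata_access_disable(void)
     {
             kasan_enable_local();
     }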

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 0c01584..88ad8b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 08/12] mm: slub: add kernel address sanitizer support for slub allocator
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.

Initially all objects in a newly allocated slab page are marked as
free. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as a redzone
(inaccessible).

We also mark an object as accessible if ksize() was called on it.
There are places in the kernel where ksize() is called to learn the
size of the really allocated area. Such callers may validly access the
whole allocated memory, so it has to be marked as accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation of these files is disabled.
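
A hypothetical snippet (not from this patch) showing the two bug
classes this makes detectable for slub-backed memory:

     char *p = kmalloc(5, GFP_KERNEL);

     p[5] = 'x';    /* past the 5 requested bytes: hits the redzone,
                       reported as an out of bounds access */
     kfree(p);
     p[0] = 'y';    /* shadow is now KASAN_KMALLOC_FREE: reported as
                       a use after free */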

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 35 ++++++++++++++++++--
 9 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8a2457d..5dc0d69 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 386cc8b..1fa4fe8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 33d9971..5f0138f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b336073..9f5326e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2a6a961..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 8ac3b6b..185d04c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -76,11 +81,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e03dd6f..4dcbc2d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -789,6 +789,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -973,8 +974,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 88ad8b8..6af95c0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kasan_slab_free(s, x);
 
 	/*
 	 * Trouble is that we may no longer disable interrupts in the fast path
@@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1419,8 +1429,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2491,6 +2503,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2517,6 +2530,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2900,6 +2915,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3272,6 +3288,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3315,12 +3333,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3336,6 +3356,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 09/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Alexander Viro

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a few
bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that dname is allocated using
kmalloc() and that kmalloc() internally rounds up the allocation size.
So this is not a bug, but it makes kasan complain about such accesses.
To avoid such reports we mark the rounded-up allocation size in the
shadow as accessible.
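
A hypothetical worked example on a 64-bit kernel with name->len == 5:

     requested for dname:       name->len + 1 == 6 bytes
     word-at-a-time compare:    sizeof(unsigned long) == 8 bytes
     unpoisoned below:          round_up(6, 8) == 8 bytes

so the 2-byte overread in dentry_string_cmp() stays unreported.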

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index a6c5d7e..3914e56 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1429,6 +1431,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 10/12] kmemleak: disable kasan instrumentation for kmemleak
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses that rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan checks
around those accesses.
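
A small illustration of the size mismatch (the sizes are example values,
not taken from the patch):

	p = kmalloc(100, GFP_KERNEL);	/* served from the kmalloc-128 cache */
	/*
	 * kmemleak records the object with size 128 (the cache's object
	 * size), while kasan poisons bytes 100..127 as a redzone, so
	 * scanning or checksumming all 128 bytes trips kasan unless the
	 * checks are disabled around those accesses.
	 */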

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 10/12] kmemleak: disable kasan instrumentation for kmemleak
@ 2014-11-24 18:02     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses that rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan checks
around those accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 11/12] lib: add kasan test module
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).
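
As a usage note (assuming the module is built as test_kasan.ko with
CONFIG_TEST_KASAN=m): loading it, e.g. with insmod or modprobe, runs all
of the tests from the init function and prints their reports to the
kernel log. Since kmalloc_tests_init() returns -EAGAIN, the load itself
fails once the tests have run, so the module never stays loaded.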

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 1fa4fe8..8548646 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 750617c..1d8211a 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..896dee5
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 11/12] lib: add kasan test module
@ 2014-11-24 18:02     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 1fa4fe8..8548646 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 750617c..1d8211a 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..896dee5
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 12/12] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  2014-11-24 18:02   ` Andrey Ryabinin
@ 2014-11-24 18:02     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Matt Fleming, Thomas Gleixner, Ingo Molnar

Recently, instrumentation of builtin function calls was removed from GCC 5.0.
To check the memory accessed by such functions, userspace asan always uses
interceptors for them.

So now we should do this as well. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__' prefix.
For files that are built without kasan instrumentation (e.g. mm/slub.c)
the original mem* functions are replaced (via #define) with the prefixed
variants, because we don't want to check memory accesses there.
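
As a toy userspace illustration of the weak/strong override this relies on
(the file and function names below are invented for the example; the kernel
patch marks the assembly routines weak with the '.weak' directive instead):

	/* default.c: a weak definition, may be overridden at link time */
	__attribute__((weak)) int get_value(void)
	{
		return 1;
	}

	/* override.c: a strong definition; when both objects are linked,
	 * the linker picks this one */
	int get_value(void)
	{
		return 2;
	}

	/* main.c */
	#include <stdio.h>
	int get_value(void);
	int main(void)
	{
		printf("%d\n", get_value());	/* prints 2 */
		return 0;
	}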

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c |  2 ++
 arch/x86/boot/compressed/misc.h  |  1 +
 arch/x86/include/asm/string_64.h | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c | 10 ++++++++--
 arch/x86/lib/memcpy_64.S         |  2 ++
 arch/x86/lib/memmove_64.S        |  4 ++++
 arch/x86/lib/memset_64.S         | 10 ++++++----
 mm/kasan/kasan.c                 | 28 +++++++++++++++++++++++++++-
 8 files changed, 67 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 1acf605..8f46fa7 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -14,6 +14,8 @@
 #include <asm/desc.h>
 
 #undef memcpy			/* Use memcpy from misc.c */
+#undef memset
+#undef memmove
 
 #include "eboot.h"
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..d79db86 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 9f5326e..190e471 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,32 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	check_memory_region((unsigned long)addr, len, true);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t count)
+{
+	check_memory_region((unsigned long)src, count, false);
+	check_memory_region((unsigned long)dest, count, true);
+
+	return __memmove(dest, src, count);
+}
+
+#undef memcpy
+void *memcpy(void *to, const void *from, size_t len)
+{
+	check_memory_region((unsigned long)from, len, false);
+	check_memory_region((unsigned long)to, len, true);
+
+	return __memcpy(to, from, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v7 12/12] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
@ 2014-11-24 18:02     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, linux-kernel,
	Matt Fleming, Thomas Gleixner, Ingo Molnar

Recently, instrumentation of builtin function calls was removed from GCC 5.0.
To check the memory accessed by such functions, userspace asan always uses
interceptors for them.

So now we should do this as well. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__' prefix.
For files that are built without kasan instrumentation (e.g. mm/slub.c)
the original mem* functions are replaced (via #define) with the prefixed
variants, because we don't want to check memory accesses there.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c |  2 ++
 arch/x86/boot/compressed/misc.h  |  1 +
 arch/x86/include/asm/string_64.h | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c | 10 ++++++++--
 arch/x86/lib/memcpy_64.S         |  2 ++
 arch/x86/lib/memmove_64.S        |  4 ++++
 arch/x86/lib/memset_64.S         | 10 ++++++----
 mm/kasan/kasan.c                 | 28 +++++++++++++++++++++++++++-
 8 files changed, 67 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 1acf605..8f46fa7 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -14,6 +14,8 @@
 #include <asm/desc.h>
 
 #undef memcpy			/* Use memcpy from misc.c */
+#undef memset
+#undef memmove
 
 #include "eboot.h"
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..d79db86 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 9f5326e..190e471 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,32 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	check_memory_region((unsigned long)addr, len, true);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t count)
+{
+	check_memory_region((unsigned long)src, count, false);
+	check_memory_region((unsigned long)dest, count, true);
+
+	return __memmove(dest, src, count);
+}
+
+#undef memcpy
+void *memcpy(void *to, const void *from, size_t len)
+{
+	check_memory_region((unsigned long)from, len, false);
+	check_memory_region((unsigned long)to, len, true);
+
+	return __memcpy(to, from, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.1.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-24 18:45       ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-24 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Andrew Morton
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Thomas Gleixner, Ingo Molnar

On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
> +static int kasan_die_handler(struct notifier_block *self,
> +			     unsigned long val,
> +			     void *data)
> +{
> +	if (val == DIE_GPF) {
> +		pr_emerg("CONFIG_KASAN_INLINE enabled\n");
> +		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block kasan_die_notifier = {
> +	.notifier_call = kasan_die_handler,
> +};

This part fails to compile:

  CC      arch/x86/mm/kasan_init_64.o
arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
  if (val == DIE_GPF) {
             ^
arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
  register_die_notifier(&kasan_die_notifier);
  ^
cc1: some warnings being treated as errors
make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1


Simple fix:

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 70041fd..c8f7f3e 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -5,6 +5,7 @@
 #include <linux/vmalloc.h>

 #include <asm/tlbflush.h>
+#include <linux/kdebug.h>

 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_X_MAX];
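
(As far as I can tell, linux/kdebug.h pulls in the arch-specific
asm/kdebug.h, which defines DIE_GPF, and it also declares
register_die_notifier(), so this single include resolves both errors.)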


Thanks,
Sasha

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
@ 2014-11-24 18:45       ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2014-11-24 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Andrew Morton
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, Dave Jones, x86, linux-mm,
	linux-kernel, Thomas Gleixner, Ingo Molnar

On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
> +static int kasan_die_handler(struct notifier_block *self,
> +			     unsigned long val,
> +			     void *data)
> +{
> +	if (val == DIE_GPF) {
> +		pr_emerg("CONFIG_KASAN_INLINE enabled\n");
> +		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block kasan_die_notifier = {
> +	.notifier_call = kasan_die_handler,
> +};

This part fails to compile:

  CC      arch/x86/mm/kasan_init_64.o
arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
  if (val == DIE_GPF) {
             ^
arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
  register_die_notifier(&kasan_die_notifier);
  ^
cc1: some warnings being treated as errors
make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1


Simple fix:

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 70041fd..c8f7f3e 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -5,6 +5,7 @@
 #include <linux/vmalloc.h>

 #include <asm/tlbflush.h>
+#include <linux/kdebug.h>

 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_X_MAX];


Thanks,
Sasha

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 05/12] mm: slub: introduce virt_to_obj function.
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-24 20:08       ` Christoph Lameter
  -1 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-11-24 20:08 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

On Mon, 24 Nov 2014, Andrey Ryabinin wrote:

> virt_to_obj takes a kmem_cache address, the address of a slab page,
> and an address x pointing somewhere inside a slab object;
> it returns the address of the beginning of that object.

Acked-by: Christoph Lameter <cl@linux.com>
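
A minimal sketch of what such a helper could look like, assuming objects
are laid out back to back at s->size strides inside the slab page (this is
an illustration, not necessarily the exact code from the patch):

	static inline void *virt_to_obj(struct kmem_cache *s,
					void *slab_page, void *x)
	{
		/* snap x down to the start of the object it points into */
		return x - ((x - slab_page) % s->size);
	}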

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 05/12] mm: slub: introduce virt_to_obj function.
@ 2014-11-24 20:08       ` Christoph Lameter
  0 siblings, 0 replies; 862+ messages in thread
From: Christoph Lameter @ 2014-11-24 20:08 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

On Mon, 24 Nov 2014, Andrey Ryabinin wrote:

> virt_to_obj takes a kmem_cache address, the address of a slab page,
> and an address x pointing somewhere inside a slab object;
> it returns the address of the beginning of that object.

Acked-by: Christoph Lameter <cl@linux.com>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
  2014-11-24 18:45       ` Sasha Levin
@ 2014-11-24 21:26         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 21:26 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, LKML, Thomas Gleixner,
	Ingo Molnar

2014-11-24 21:45 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
> On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
>> +static int kasan_die_handler(struct notifier_block *self,
>> +                          unsigned long val,
>> +                          void *data)
>> +{
>> +     if (val == DIE_GPF) {
>> +             pr_emerg("CONFIG_KASAN_INLINE enabled\n");
>> +             pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
>> +     }
>> +     return NOTIFY_OK;
>> +}
>> +
>> +static struct notifier_block kasan_die_notifier = {
>> +     .notifier_call = kasan_die_handler,
>> +};
>
> This part fails to compile:
>
>   CC      arch/x86/mm/kasan_init_64.o
> arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
> arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
>   if (val == DIE_GPF) {
>              ^
> arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
> arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
> arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
>   register_die_notifier(&kasan_die_notifier);
>   ^
> cc1: some warnings being treated as errors
> make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1
>
>
> Simple fix:
>

Thanks, I thought I'd fixed this, but apparently I forgot to commit it.


> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
> index 70041fd..c8f7f3e 100644
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -5,6 +5,7 @@
>  #include <linux/vmalloc.h>
>
>  #include <asm/tlbflush.h>
> +#include <linux/kdebug.h>
>
>  extern pgd_t early_level4_pgt[PTRS_PER_PGD];
>  extern struct range pfn_mapped[E820_X_MAX];
>
>
> Thanks,
> Sasha
>

-- 
Best regards,
Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
@ 2014-11-24 21:26         ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-24 21:26 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, LKML, Thomas Gleixner,
	Ingo Molnar

2014-11-24 21:45 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
> On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
>> +static int kasan_die_handler(struct notifier_block *self,
>> +                          unsigned long val,
>> +                          void *data)
>> +{
>> +     if (val == DIE_GPF) {
>> +             pr_emerg("CONFIG_KASAN_INLINE enabled\n");
>> +             pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
>> +     }
>> +     return NOTIFY_OK;
>> +}
>> +
>> +static struct notifier_block kasan_die_notifier = {
>> +     .notifier_call = kasan_die_handler,
>> +};
>
> This part fails to compile:
>
>   CC      arch/x86/mm/kasan_init_64.o
> arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
> arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
>   if (val == DIE_GPF) {
>              ^
> arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
> arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
> arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
>   register_die_notifier(&kasan_die_notifier);
>   ^
> cc1: some warnings being treated as errors
> make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1
>
>
> Simple fix:
>

Thanks, I thought I'd fixed this, but apparently I forgot to commit it.


> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
> index 70041fd..c8f7f3e 100644
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -5,6 +5,7 @@
>  #include <linux/vmalloc.h>
>
>  #include <asm/tlbflush.h>
> +#include <linux/kdebug.h>
>
>  extern pgd_t early_level4_pgt[PTRS_PER_PGD];
>  extern struct range pfn_mapped[E820_X_MAX];
>
>
> Thanks,
> Sasha
>

-- 
Best regards,
Andrey Ryabinin

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
  2014-11-24 21:26         ` Andrey Ryabinin
@ 2014-11-25 10:47           ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 10:47 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Sasha Levin, Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Christoph Lameter, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, LKML, Thomas Gleixner, Ingo Molnar

LGTM.

Also, please send a pull request to google/kasan whenever you're ready
(for the whole bulk of changes).



On Tue, Nov 25, 2014 at 12:26 AM, Andrey Ryabinin
<ryabinin.a.a@gmail.com> wrote:
> 2014-11-24 21:45 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
>> On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
>>> +static int kasan_die_handler(struct notifier_block *self,
>>> +                          unsigned long val,
>>> +                          void *data)
>>> +{
>>> +     if (val == DIE_GPF) {
>>> +             pr_emerg("CONFIG_KASAN_INLINE enabled\n");
>>> +             pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
>>> +     }
>>> +     return NOTIFY_OK;
>>> +}
>>> +
>>> +static struct notifier_block kasan_die_notifier = {
>>> +     .notifier_call = kasan_die_handler,
>>> +};
>>
>> This part fails to compile:
>>
>>   CC      arch/x86/mm/kasan_init_64.o
>> arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
>> arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
>>   if (val == DIE_GPF) {
>>              ^
>> arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
>> arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
>> arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
>>   register_die_notifier(&kasan_die_notifier);
>>   ^
>> cc1: some warnings being treated as errors
>> make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1
>>
>>
>> Simple fix:
>>
>
> Thanks, I thought I've fixed this, but apparently I forgot to commit it.
>
>
>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>> index 70041fd..c8f7f3e 100644
>> --- a/arch/x86/mm/kasan_init_64.c
>> +++ b/arch/x86/mm/kasan_init_64.c
>> @@ -5,6 +5,7 @@
>>  #include <linux/vmalloc.h>
>>
>>  #include <asm/tlbflush.h>
>> +#include <linux/kdebug.h>
>>
>>  extern pgd_t early_level4_pgt[PTRS_PER_PGD];
>>  extern struct range pfn_mapped[E820_X_MAX];
>>
>>
>> Thanks,
>> Sasha
>>
>
> --
> Best regards,
> Andrey Ryabinin

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 03/12] x86_64: add KASan support
@ 2014-11-25 10:47           ` Dmitry Chernenkov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 10:47 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Sasha Levin, Andrey Ryabinin, Andrew Morton, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Christoph Lameter, Joonsoo Kim,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin,
	Dave Jones, x86, linux-mm, LKML, Thomas Gleixner, Ingo Molnar

LGTM.

Also, please send a pull request to google/kasan whenever you're ready
(for the whole bulk of changes).



On Tue, Nov 25, 2014 at 12:26 AM, Andrey Ryabinin
<ryabinin.a.a@gmail.com> wrote:
> 2014-11-24 21:45 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
>> On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
>>> +static int kasan_die_handler(struct notifier_block *self,
>>> +                          unsigned long val,
>>> +                          void *data)
>>> +{
>>> +     if (val == DIE_GPF) {
>>> +             pr_emerg("CONFIG_KASAN_INLINE enabled\n");
>>> +             pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
>>> +     }
>>> +     return NOTIFY_OK;
>>> +}
>>> +
>>> +static struct notifier_block kasan_die_notifier = {
>>> +     .notifier_call = kasan_die_handler,
>>> +};
>>
>> This part fails to compile:
>>
>>   CC      arch/x86/mm/kasan_init_64.o
>> arch/x86/mm/kasan_init_64.c: In function ‘kasan_die_handler’:
>> arch/x86/mm/kasan_init_64.c:72:13: error: ‘DIE_GPF’ undeclared (first use in this function)
>>   if (val == DIE_GPF) {
>>              ^
>> arch/x86/mm/kasan_init_64.c:72:13: note: each undeclared identifier is reported only once for each function it appears in
>> arch/x86/mm/kasan_init_64.c: In function ‘kasan_init’:
>> arch/x86/mm/kasan_init_64.c:89:2: error: implicit declaration of function ‘register_die_notifier’ [-Werror=implicit-function-declaration]
>>   register_die_notifier(&kasan_die_notifier);
>>   ^
>> cc1: some warnings being treated as errors
>> make[1]: *** [arch/x86/mm/kasan_init_64.o] Error 1
>>
>>
>> Simple fix:
>>
>
> Thanks, I thought I've fixed this, but apparently I forgot to commit it.
>
>
>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>> index 70041fd..c8f7f3e 100644
>> --- a/arch/x86/mm/kasan_init_64.c
>> +++ b/arch/x86/mm/kasan_init_64.c
>> @@ -5,6 +5,7 @@
>>  #include <linux/vmalloc.h>
>>
>>  #include <asm/tlbflush.h>
>> +#include <linux/kdebug.h>
>>
>>  extern pgd_t early_level4_pgt[PTRS_PER_PGD];
>>  extern struct range pfn_mapped[E820_X_MAX];
>>
>>
>> Thanks,
>> Sasha
>>
>
> --
> Best regards,
> Andrey Ryabinin

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 11/12] lib: add kasan test module
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 11:14       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 11:14 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, linux-kernel

I have a few concerns about the tests.
A) They are not fully automated; there is no checking of whether they
pass or not. This is implemented in our repository using special tags
in the log (https://github.com/google/kasan/commit/33b267553e7ffe66d5207152a3294112361b75fe;
don't mind the TODOs, they weren't broken to begin with) and a
parser script (https://code.google.com/p/address-sanitizer/source/browse/trunk/tools/kernel_test_parse.py)
to feed the kernel log to.

B) They are not thorough enough - they don't check for false negatives,
accesses more than 1 byte past the end, etc. (see the sketch below).

C) (More of a general concern about current KASAN reliability) - when
run multiple times, some tests are flaky, specifically oob_right
and uaf2. The latter needs a quarantine to work reliably (I know
Konstantin is working on it). oob_right needs redzones at the
beginning of the slabs.

I know all of these may seem like long shots, but if we want a
reliable solution (also a backportable solution), we need to at least
consider them.

Otherwise, LGTM
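
For (B), a hypothetical sketch of the kind of extra case meant above, in
the style of lib/test_kasan.c (the name and sizes are invented; checking
the "no report" part would rely on the log parser mentioned in (A)):

	static noinline void __init kmalloc_oob_far_right(void)
	{
		char *ptr = kmalloc(32, GFP_KERNEL);

		if (!ptr) {
			pr_err("Allocation failed\n");
			return;
		}

		ptr[31] = 'x';		/* in bounds: no report expected */
		ptr[32 + 7] = 'y';	/* 8 bytes past the end: report expected */
		kfree(ptr);
	}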

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> This is a test module doing various nasty things like
> out of bounds accesses, use after free. It is useful for testing
> kernel debugging features like kernel address sanitizer.
>
> It mostly concentrates on testing of slab allocator, but we
> might want to add more different stuff here in future (like
> stack/global variables out of bounds accesses and so on).
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  lib/Kconfig.kasan |   8 ++
>  lib/Makefile      |   1 +
>  lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 263 insertions(+)
>  create mode 100644 lib/test_kasan.c
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 1fa4fe8..8548646 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -43,4 +43,12 @@ config KASAN_INLINE
>
>  endchoice
>
> +config TEST_KASAN
> +       tristate "Module for testing kasan for bug detection"
> +       depends on m && KASAN
> +       help
> +         This is a test module doing various nasty things like
> +         out of bounds accesses, use after free. It is useful for testing
> +         kernel debugging features like kernel address sanitizer.
> +
>  endif
> diff --git a/lib/Makefile b/lib/Makefile
> index 750617c..1d8211a 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -38,6 +38,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
>  obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
>  obj-$(CONFIG_TEST_BPF) += test_bpf.o
>  obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
> +obj-$(CONFIG_TEST_KASAN) += test_kasan.o
>
>  ifeq ($(CONFIG_DEBUG_KOBJECT),y)
>  CFLAGS_kobject.o += -DDEBUG
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> new file mode 100644
> index 0000000..896dee5
> --- /dev/null
> +++ b/lib/test_kasan.c
> @@ -0,0 +1,254 @@
> +/*
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
> +
> +#include <linux/kernel.h>
> +#include <linux/printk.h>
> +#include <linux/slab.h>
> +#include <linux/string.h>
> +#include <linux/module.h>
> +
> +static noinline void __init kmalloc_oob_right(void)
> +{
> +       char *ptr;
> +       size_t size = 123;
> +
> +       pr_info("out-of-bounds to right\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 'x';
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_oob_left(void)
> +{
> +       char *ptr;
> +       size_t size = 15;
> +
> +       pr_info("out-of-bounds to left\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       *ptr = *(ptr - 1);
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_node_oob_right(void)
> +{
> +       char *ptr;
> +       size_t size = 4096;
> +
> +       pr_info("kmalloc_node(): out-of-bounds to right\n");
> +       ptr = kmalloc_node(size, GFP_KERNEL, 0);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 0;
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_large_oob_rigth(void)
> +{
> +       char *ptr;
> +       size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
> +
> +       pr_info("kmalloc large allocation: out-of-bounds to right\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr[size] = 0;
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_oob_krealloc_more(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size1 = 17;
> +       size_t size2 = 19;
> +
> +       pr_info("out-of-bounds after krealloc more\n");
> +       ptr1 = kmalloc(size1, GFP_KERNEL);
> +       ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               return;
> +       }
> +
> +       ptr2[size2] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_krealloc_less(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size1 = 17;
> +       size_t size2 = 15;
> +
> +       pr_info("out-of-bounds after krealloc less\n");
> +       ptr1 = kmalloc(size1, GFP_KERNEL);
> +       ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               return;
> +       }
> +       ptr2[size1] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_16(void)
> +{
> +       struct {
> +               u64 words[2];
> +       } *ptr1, *ptr2;
> +
> +       pr_info("kmalloc out-of-bounds for 16-bytes access\n");
> +       ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
> +       ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
> +       if (!ptr1 || !ptr2) {
> +               pr_err("Allocation failed\n");
> +               kfree(ptr1);
> +               kfree(ptr2);
> +               return;
> +       }
> +       *ptr1 = *ptr2;
> +       kfree(ptr1);
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmalloc_oob_in_memset(void)
> +{
> +       char *ptr;
> +       size_t size = 666;
> +
> +       pr_info("out-of-bounds in memset\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       memset(ptr, 0, size+5);
> +       kfree(ptr);
> +}
> +
> +static noinline void __init kmalloc_uaf(void)
> +{
> +       char *ptr;
> +       size_t size = 10;
> +
> +       pr_info("use-after-free\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr);
> +       *(ptr + 8) = 'x';
> +}
> +
> +static noinline void __init kmalloc_uaf_memset(void)
> +{
> +       char *ptr;
> +       size_t size = 33;
> +
> +       pr_info("use-after-free in memset\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr);
> +       memset(ptr, 0, size);
> +}
> +
> +static noinline void __init kmalloc_uaf2(void)
> +{
> +       char *ptr1, *ptr2;
> +       size_t size = 43;
> +
> +       pr_info("use-after-free after another kmalloc\n");
> +       ptr1 = kmalloc(size, GFP_KERNEL);
> +       if (!ptr1) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       kfree(ptr1);
> +       ptr2 = kmalloc(size, GFP_KERNEL);
> +       if (!ptr2) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       ptr1[40] = 'x';
> +       kfree(ptr2);
> +}
> +
> +static noinline void __init kmem_cache_oob(void)
> +{
> +       char *p;
> +       size_t size = 200;
> +       struct kmem_cache *cache = kmem_cache_create("test_cache",
> +                                               size, 0,
> +                                               0, NULL);
> +       if (!cache) {
> +               pr_err("Cache allocation failed\n");
> +               return;
> +       }
> +       pr_info("out-of-bounds in kmem_cache_alloc\n");
> +       p = kmem_cache_alloc(cache, GFP_KERNEL);
> +       if (!p) {
> +               pr_err("Allocation failed\n");
> +               kmem_cache_destroy(cache);
> +               return;
> +       }
> +
> +       *p = p[size];
> +       kmem_cache_free(cache, p);
> +       kmem_cache_destroy(cache);
> +}
> +
> +int __init kmalloc_tests_init(void)
> +{
> +       kmalloc_oob_right();
> +       kmalloc_oob_left();
> +       kmalloc_node_oob_right();
> +       kmalloc_large_oob_rigth();
> +       kmalloc_oob_krealloc_more();
> +       kmalloc_oob_krealloc_less();
> +       kmalloc_oob_16();
> +       kmalloc_oob_in_memset();
> +       kmalloc_uaf();
> +       kmalloc_uaf_memset();
> +       kmalloc_uaf2();
> +       kmem_cache_oob();
> +       return -EAGAIN;
> +}
> +
> +module_init(kmalloc_tests_init);
> +MODULE_LICENSE("GPL");
> --
> 2.1.3
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 08/12] mm: slub: add kernel address sanitizer support for slub allocator
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:17       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:17 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Pekka Enberg, David Rientjes

FYI, when I backported Kasan to 3.14, kasan_mark_slab_padding()
sometimes computed a negative padding size (which the old formula
allows whenever the rounded-up end of the last object lands past the
end of the page). It started working once the patch below was applied:

@@ -262,12 +264,11 @@ void kasan_free_pages(struct page *page, unsigned int order)
 void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
 			struct page *page)
 {
-	unsigned long object_end = (unsigned long)object + s->size;
-	unsigned long padding_start = round_up(object_end,
-				KASAN_SHADOW_SCALE_SIZE);
-	unsigned long padding_end = (unsigned long)page_address(page) +
-				(PAGE_SIZE << compound_order(page));
-	size_t size = padding_end - padding_start;
+	unsigned long page_start = (unsigned long) page_address(page);
+	unsigned long page_end = page_start + (PAGE_SIZE << compound_order(page));
+	unsigned long padding_start = round_up(page_end - s->reserved,
+				KASAN_SHADOW_SCALE_SIZE);
+	size_t size = page_end - padding_start;
 
 	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
 }

Also, in kasan_slab_free you poison the shadow with FREE for not just
the object itself but also its redzone. This is inefficient, and it
turns a right out-of-bounds access that runs into the next (freed)
object into a bogus use-after-free report. This is fixed here
https://github.com/google/kasan/commit/4b3238be392ba0bc56bbc934ac545df3ff840782
- please apply that patch.
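
A minimal sketch of the intended behaviour (reconstructed from the
description above, not copied from the linked commit): poison only the
object itself on free, so the redzone keeps its redzone marker:

void kasan_slab_free(struct kmem_cache *cache, void *object)
{
	unsigned long rounded_up_size = round_up(cache->object_size,
					KASAN_SHADOW_SCALE_SIZE);

	/* RCU slabs could be legally used after free within the RCU period */
	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
		return;

	/* only the object, not the whole cache->size slot with its redzone */
	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
}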


LGTM

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> Initially all objects in newly allocated slab page, marked as free.
> Later, when allocation of slub object happens, requested by caller
> number of bytes marked as accessible, and the rest of the object
> (including slub's metadata) marked as redzone (inaccessible).
>
> We also mark object as accessible if ksize was called for this object.
> There is some places in kernel where ksize function is called to inquire
> size of really allocated area. Such callers could validly access whole
> allocated memory, so it should be marked as accessible.
>
> Code in slub.c and slab_common.c files could validly access to object's
> metadata, so instrumentation for this files are disabled.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h | 21 ++++++++++++
>  include/linux/slab.h  | 11 ++++--
>  lib/Kconfig.kasan     |  1 +
>  mm/Makefile           |  3 ++
>  mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |  4 +++
>  mm/kasan/report.c     | 25 ++++++++++++++
>  mm/slab_common.c      |  5 ++-
>  mm/slub.c             | 35 ++++++++++++++++++--
>  9 files changed, 191 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 9714fba..0463b90 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
>
>  void kasan_alloc_pages(struct page *page, unsigned int order);
>  void kasan_free_pages(struct page *page, unsigned int order);
> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
> +                       struct page *page);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size);
> +void kasan_kfree_large(const void *ptr);
> +void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
> +void kasan_krealloc(const void *object, size_t new_size);
> +
> +void kasan_slab_alloc(struct kmem_cache *s, void *object);
> +void kasan_slab_free(struct kmem_cache *s, void *object);
>
>  #else /* CONFIG_KASAN */
>
> @@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
>
>  static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
>  static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
> +                                       struct page *page) {}
> +
> +static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
> +static inline void kasan_kfree_large(const void *ptr) {}
> +static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
> +                               size_t size) {}
> +static inline void kasan_krealloc(const void *object, size_t new_size) {}
> +
> +static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
> +static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
>
>  #endif /* CONFIG_KASAN */
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 8a2457d..5dc0d69 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -104,6 +104,7 @@
>                                 (unsigned long)ZERO_SIZE_PTR)
>
>  #include <linux/kmemleak.h>
> +#include <linux/kasan.h>
>
>  struct mem_cgroup;
>  /*
> @@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
>  static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
>                 gfp_t flags, size_t size)
>  {
> -       return kmem_cache_alloc(s, flags);
> +       void *ret = kmem_cache_alloc(s, flags);
> +
> +       kasan_kmalloc(s, ret, size);
> +       return ret;
>  }
>
>  static __always_inline void *
> @@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
>                               gfp_t gfpflags,
>                               int node, size_t size)
>  {
> -       return kmem_cache_alloc_node(s, gfpflags, node);
> +       void *ret = kmem_cache_alloc_node(s, gfpflags, node);
> +
> +       kasan_kmalloc(s, ret, size);
> +       return ret;
>  }
>  #endif /* CONFIG_TRACING */
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 386cc8b..1fa4fe8 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
>  config KASAN
>         bool "AddressSanitizer: runtime memory debugger"
>         depends on !MEMORY_HOTPLUG
> +       depends on SLUB_DEBUG
>         help
>           Enables address sanitizer - runtime memory debugger,
>           designed to find out-of-bounds accesses and use-after-free bugs.
> diff --git a/mm/Makefile b/mm/Makefile
> index 33d9971..5f0138f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -2,6 +2,9 @@
>  # Makefile for the linux memory manager.
>  #
>
> +KASAN_SANITIZE_slab_common.o := n
> +KASAN_SANITIZE_slub.o := n
> +
>  mmu-y                  := nommu.o
>  mmu-$(CONFIG_MMU)      := gup.o highmem.o memory.o mincore.o \
>                            mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index b336073..9f5326e 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -30,6 +30,7 @@
>  #include <linux/kasan.h>
>
>  #include "kasan.h"
> +#include "../slab.h"
>
>  /*
>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> @@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
>                                 KASAN_FREE_PAGE);
>  }
>
> +void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
> +                       struct page *page)
> +{
> +       unsigned long object_end = (unsigned long)object + s->size;
> +       unsigned long padding_start = round_up(object_end,
> +                                       KASAN_SHADOW_SCALE_SIZE);
> +       unsigned long padding_end = (unsigned long)page_address(page) +
> +                                       (PAGE_SIZE << compound_order(page));
> +       size_t size = padding_end - padding_start;
> +
> +       kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
> +}
> +
> +void kasan_slab_alloc(struct kmem_cache *cache, void *object)
> +{
> +       kasan_kmalloc(cache, object, cache->object_size);
> +}
> +
> +void kasan_slab_free(struct kmem_cache *cache, void *object)
> +{
> +       unsigned long size = cache->size;
> +       unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +
> +       /* RCU slabs could be legally used after free within the RCU period */
> +       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
> +               return;
> +
> +       kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
> +}
> +
> +void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
> +{
> +       unsigned long redzone_start;
> +       unsigned long redzone_end;
> +
> +       if (unlikely(object == NULL))
> +               return;
> +
> +       redzone_start = round_up((unsigned long)(object + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = (unsigned long)object + cache->size;
> +
> +       kasan_unpoison_shadow(object, size);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               KASAN_KMALLOC_REDZONE);
> +
> +}
> +EXPORT_SYMBOL(kasan_kmalloc);
> +
> +void kasan_kmalloc_large(const void *ptr, size_t size)
> +{
> +       struct page *page;
> +       unsigned long redzone_start;
> +       unsigned long redzone_end;
> +
> +       if (unlikely(ptr == NULL))
> +               return;
> +
> +       page = virt_to_page(ptr);
> +       redzone_start = round_up((unsigned long)(ptr + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> +       kasan_unpoison_shadow(ptr, size);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               KASAN_PAGE_REDZONE);
> +}
> +
> +void kasan_krealloc(const void *object, size_t size)
> +{
> +       struct page *page;
> +
> +       if (unlikely(object == ZERO_SIZE_PTR))
> +               return;
> +
> +       page = virt_to_head_page(object);
> +
> +       if (unlikely(!PageSlab(page)))
> +               kasan_kmalloc_large(object, size);
> +       else
> +               kasan_kmalloc(page->slab_cache, object, size);
> +}
> +
> +void kasan_kfree_large(const void *ptr)
> +{
> +       struct page *page = virt_to_page(ptr);
> +
> +       kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +                       KASAN_FREE_PAGE);
> +}
> +
>  void __asan_load1(unsigned long addr)
>  {
>         check_memory_region(addr, 1, false);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 2a6a961..049349b 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -7,6 +7,10 @@
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
> +#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> +#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 8ac3b6b..185d04c 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -24,6 +24,7 @@
>  #include <linux/kasan.h>
>
>  #include "kasan.h"
> +#include "../slab.h"
>
>  /* Shadow layout customization. */
>  #define SHADOW_BYTES_PER_BLOCK 1
> @@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
>         shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
>
>         switch (shadow_val) {
> +       case KASAN_PAGE_REDZONE:
> +       case KASAN_SLAB_PADDING:
> +       case KASAN_KMALLOC_REDZONE:
>         case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>                 bug_type = "out of bounds access";
>                 break;
>         case KASAN_FREE_PAGE:
> +       case KASAN_KMALLOC_FREE:
>                 bug_type = "use after free";
>                 break;
>         case KASAN_SHADOW_GAP:
> @@ -76,11 +81,31 @@ static void print_error_description(struct access_info *info)
>  static void print_address_description(struct access_info *info)
>  {
>         struct page *page;
> +       struct kmem_cache *cache;
>         u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
>
>         page = virt_to_head_page((void *)info->access_addr);
>
>         switch (shadow_val) {
> +       case KASAN_SLAB_PADDING:
> +               cache = page->slab_cache;
> +               slab_err(cache, page, "access to slab redzone");
> +               dump_stack();
> +               break;
> +       case KASAN_KMALLOC_FREE:
> +       case KASAN_KMALLOC_REDZONE:
> +       case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +               if (PageSlab(page)) {
> +                       void *object;
> +                       void *slab_page = page_address(page);
> +
> +                       cache = page->slab_cache;
> +                       object = virt_to_obj(cache, slab_page,
> +                                       (void *)info->access_addr);
> +                       object_err(cache, page, object, "kasan error");
> +                       break;
> +               }
> +       case KASAN_PAGE_REDZONE:
>         case KASAN_FREE_PAGE:
>                 dump_page(page, "kasan error");
>                 dump_stack();
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index e03dd6f..4dcbc2d 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -789,6 +789,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
>         page = alloc_kmem_pages(flags, order);
>         ret = page ? page_address(page) : NULL;
>         kmemleak_alloc(ret, size, 1, flags);
> +       kasan_kmalloc_large(ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmalloc_order);
> @@ -973,8 +974,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
>         if (p)
>                 ks = ksize(p);
>
> -       if (ks >= new_size)
> +       if (ks >= new_size) {
> +               kasan_krealloc((void *)p, new_size);
>                 return (void *)p;
> +       }
>
>         ret = kmalloc_track_caller(new_size, flags);
>         if (ret && p)
> diff --git a/mm/slub.c b/mm/slub.c
> index 88ad8b8..6af95c0 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -33,6 +33,7 @@
>  #include <linux/stacktrace.h>
>  #include <linux/prefetch.h>
>  #include <linux/memcontrol.h>
> +#include <linux/kasan.h>
>
>  #include <trace/events/kmem.h>
>
> @@ -469,10 +470,12 @@ static int disable_higher_order_debug;
>
>  static inline void metadata_access_enable(void)
>  {
> +       kasan_disable_local();
>  }
>
>  static inline void metadata_access_disable(void)
>  {
> +       kasan_enable_local();
>  }
>
>  /*
> @@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
>  static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
>  {
>         kmemleak_alloc(ptr, size, 1, flags);
> +       kasan_kmalloc_large(ptr, size);
>  }
>
>  static inline void kfree_hook(const void *x)
>  {
>         kmemleak_free(x);
> +       kasan_kfree_large(x);
>  }
>
>  static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
> @@ -1264,11 +1269,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>         flags &= gfp_allowed_mask;
>         kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
>         kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
> +       kasan_slab_alloc(s, object);
>  }
>
>  static inline void slab_free_hook(struct kmem_cache *s, void *x)
>  {
>         kmemleak_free_recursive(x, s->flags);
> +       kasan_slab_free(s, x);
>
>         /*
>          * Trouble is that we may no longer disable interrupts in the fast path
> @@ -1381,8 +1388,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>                                 void *object)
>  {
>         setup_object_debug(s, page, object);
> -       if (unlikely(s->ctor))
> +       if (unlikely(s->ctor)) {
> +               kasan_slab_alloc(s, object);
>                 s->ctor(object);
> +       }
> +       kasan_slab_free(s, object);
>  }
>
>  static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> @@ -1419,8 +1429,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>                 setup_object(s, page, p);
>                 if (likely(idx < page->objects))
>                         set_freepointer(s, p, p + s->size);
> -               else
> +               else {
>                         set_freepointer(s, p, NULL);
> +                       kasan_mark_slab_padding(s, p, page);
> +               }
>         }
>
>         page->freelist = start;
> @@ -2491,6 +2503,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
>  {
>         void *ret = slab_alloc(s, gfpflags, _RET_IP_);
>         trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
> +       kasan_kmalloc(s, ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_trace);
> @@ -2517,6 +2530,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
>
>         trace_kmalloc_node(_RET_IP_, ret,
>                            size, s->size, gfpflags, node);
> +
> +       kasan_kmalloc(s, ret, size);
>         return ret;
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
> @@ -2900,6 +2915,7 @@ static void early_kmem_cache_node_alloc(int node)
>         init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>         init_tracking(kmem_cache_node, n);
>  #endif
> +       kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
>         init_kmem_cache_node(n);
>         inc_slabs_node(kmem_cache_node, node, page->objects);
>
> @@ -3272,6 +3288,8 @@ void *__kmalloc(size_t size, gfp_t flags)
>
>         trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
>
> +       kasan_kmalloc(s, ret, size);
> +
>         return ret;
>  }
>  EXPORT_SYMBOL(__kmalloc);
> @@ -3315,12 +3333,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>
>         trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
>
> +       kasan_kmalloc(s, ret, size);
> +
>         return ret;
>  }
>  EXPORT_SYMBOL(__kmalloc_node);
>  #endif
>
> -size_t ksize(const void *object)
> +static size_t __ksize(const void *object)
>  {
>         struct page *page;
>
> @@ -3336,6 +3356,15 @@ size_t ksize(const void *object)
>
>         return slab_ksize(page->slab_cache);
>  }
> +
> +size_t ksize(const void *object)
> +{
> +       size_t size = __ksize(object);
> +       /* We assume that ksize callers could use whole allocated area,
> +          so we need unpoison this area. */
> +       kasan_krealloc(object, size);
> +       return size;
> +}
>  EXPORT_SYMBOL(ksize);
>
>  void kfree(const void *x)
> --
> 2.1.3
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:22       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:22 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Pekka Enberg, David Rientjes

LGTM

Does this mean we're going to sanitize the slub code itself?)

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Wrap access to object's metadata in external functions with
> metadata_access_enable()/metadata_access_disable() function calls.
>
> This hooks separates payload accesses from metadata accesses
> which might be useful for different checkers (e.g. KASan).
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/slub.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0c01584..88ad8b8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -467,13 +467,23 @@ static int slub_debug;
>  static char *slub_debug_slabs;
>  static int disable_higher_order_debug;
>
> +static inline void metadata_access_enable(void)
> +{
> +}
> +
> +static inline void metadata_access_disable(void)
> +{
> +}
> +
>  /*
>   * Object debugging
>   */
>  static void print_section(char *text, u8 *addr, unsigned int length)
>  {
> +       metadata_access_enable();
>         print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
>                         length, 1);
> +       metadata_access_disable();
>  }
>
>  static struct track *get_track(struct kmem_cache *s, void *object,
> @@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
>                 trace.max_entries = TRACK_ADDRS_COUNT;
>                 trace.entries = p->addrs;
>                 trace.skip = 3;
> +               metadata_access_enable();
>                 save_stack_trace(&trace);
> +               metadata_access_disable();
>
>                 /* See rant in lockdep.c */
>                 if (trace.nr_entries != 0 &&
> @@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
>         u8 *fault;
>         u8 *end;
>
> +       metadata_access_enable();
>         fault = memchr_inv(start, value, bytes);
> +       metadata_access_disable();
>         if (!fault)
>                 return 1;
>
> @@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
>         if (!remainder)
>                 return 1;
>
> +       metadata_access_enable();
>         fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
> +       metadata_access_disable();
>         if (!fault)
>                 return 1;
>         while (end > fault && end[-1] == POISON_INUSE)
> --
> 2.1.3
>
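
To illustrate the intended usage pattern, here is a sketch of a hypothetical
new metadata reader in mm/slub.c (check_track() is made up for this example;
get_track(), struct track and enum track_item already exist in slub.c): any
code that inspects object metadata brackets the access with the new hooks,
so a checker plugged into them can tell these reads apart from payload
accesses.

	static void check_track(struct kmem_cache *s, void *object,
				enum track_item alloc)
	{
		struct track *p;

		metadata_access_enable();
		p = get_track(s, object, alloc);
		if (p->addr)
			pr_err("track: pid %d cpu %d at %pS\n",
			       p->pid, p->cpu, (void *)p->addr);
		metadata_access_disable();
	}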

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 06/12] mm: slub: share slab_err and object_err functions
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:26       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:26 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Joe Perches, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Pekka Enberg, David Rientjes

LGTM

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Remove static and add function declarations to mm/slab.h so they
> could be used by kernel address sanitizer.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/slub_def.h | 5 +++++
>  mm/slub.c                | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index c75bc1d..144b5cb 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -115,4 +115,9 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
>         return x - ((x - slab_page) % s->size);
>  }
>
> +__printf(3, 4)
> +void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
> +void object_err(struct kmem_cache *s, struct page *page,
> +               u8 *object, char *reason);
> +
>  #endif /* _LINUX_SLUB_DEF_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 95d2142..0c01584 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>         dump_stack();
>  }
>
> -static void object_err(struct kmem_cache *s, struct page *page,
> +void object_err(struct kmem_cache *s, struct page *page,
>                         u8 *object, char *reason)
>  {
>         slab_bug(s, "%s", reason);
>         print_trailer(s, page, object);
>  }
>
> -static void slab_err(struct kmem_cache *s, struct page *page,
> +void slab_err(struct kmem_cache *s, struct page *page,
>                         const char *fmt, ...)
>  {
>         va_list args;
> --
> 2.1.3
>
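
A hypothetical caller on the KASan side (not part of this patch) shows why
the symbols are shared: once exported, the report path can reuse SLUB's own
printing to describe the slub object containing the bad address.

	/* In mm/kasan/report.c, assuming the declarations above are visible. */
	static void describe_object(struct kmem_cache *cache, struct page *page,
				    u8 *object)
	{
		object_err(cache, page, object, "kasan: bad access detected");
	}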

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 04/12] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:28       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:28 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML

LGTM

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Add kernel address sanitizer hooks to mark allocated page's addresses
> as accessible in corresponding shadow region.
> Mark freed pages as inaccessible.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/compaction.c       |  2 ++
>  mm/kasan/kasan.c      | 14 ++++++++++++++
>  mm/kasan/kasan.h      |  1 +
>  mm/kasan/report.c     |  7 +++++++
>  mm/page_alloc.c       |  3 +++
>  6 files changed, 33 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 01c99fe..9714fba 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
>
>  void kasan_unpoison_shadow(const void *address, size_t size);
>
> +void kasan_alloc_pages(struct page *page, unsigned int order);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN */
>
>  static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
> @@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
>  static inline void kasan_enable_local(void) {}
>  static inline void kasan_disable_local(void) {}
>
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +
>  #endif /* CONFIG_KASAN */
>
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a857225..a5c8e84 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -16,6 +16,7 @@
>  #include <linux/sysfs.h>
>  #include <linux/balloon_compaction.h>
>  #include <linux/page-isolation.h>
> +#include <linux/kasan.h>
>  #include "internal.h"
>
>  #ifdef CONFIG_COMPACTION
> @@ -61,6 +62,7 @@ static void map_pages(struct list_head *list)
>         list_for_each_entry(page, list, lru) {
>                 arch_alloc_page(page, 0);
>                 kernel_map_pages(page, 1, 1);
> +               kasan_alloc_pages(page, 0);
>         }
>  }
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index f77be01..b336073 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
>         kasan_report(addr, size, write);
>  }
>
> +void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> +       if (likely(!PageHighMem(page)))
> +               kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +       if (likely(!PageHighMem(page)))
> +               kasan_poison_shadow(page_address(page),
> +                               PAGE_SIZE << order,
> +                               KASAN_FREE_PAGE);
> +}
> +
>  void __asan_load1(unsigned long addr)
>  {
>         check_memory_region(addr, 1, false);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 6da1d78..2a6a961 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,7 @@
>  #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>  #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
>
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
>
>  struct access_info {
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 56a2089..8ac3b6b 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
>         case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
>                 bug_type = "out of bounds access";
>                 break;
> +       case KASAN_FREE_PAGE:
> +               bug_type = "use after free";
> +               break;
>         case KASAN_SHADOW_GAP:
>                 bug_type = "wild memory access";
>                 break;
> @@ -78,6 +81,10 @@ static void print_address_description(struct access_info *info)
>         page = virt_to_head_page((void *)info->access_addr);
>
>         switch (shadow_val) {
> +       case KASAN_FREE_PAGE:
> +               dump_page(page, "kasan error");
> +               dump_stack();
> +               break;
>         case KASAN_SHADOW_GAP:
>                 pr_err("No metainfo is available for this access.\n");
>                 dump_stack();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b0e6eab..3829589 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -58,6 +58,7 @@
>  #include <linux/page-debug-flags.h>
>  #include <linux/hugetlb.h>
>  #include <linux/sched/rt.h>
> +#include <linux/kasan.h>
>
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -758,6 +759,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>
>         trace_mm_page_free(page, order);
>         kmemcheck_free_shadow(page, order);
> +       kasan_free_pages(page, order);
>
>         if (PageAnon(page))
>                 page->mapping = NULL;
> @@ -940,6 +942,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
>
>         arch_alloc_page(page, order);
>         kernel_map_pages(page, 1 << order, 1);
> +       kasan_alloc_pages(page, order);
>
>         if (gfp_flags & __GFP_ZERO)
>                 prep_zero_page(page, order, gfp_flags);
> --
> 2.1.3
>
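
As a worked example of what the hooks above do (a fragment, assuming
PAGE_SIZE == 4096 and KASAN_SHADOW_SCALE_SHIFT == 3 as elsewhere in this
series):

	/*
	 * Freeing an order-2 page block:
	 *
	 *   bytes covered = PAGE_SIZE << 2        = 16384
	 *   shadow bytes  = (PAGE_SIZE << 2) >> 3 =  2048
	 *
	 * kasan_free_pages() sets those 2048 shadow bytes to KASAN_FREE_PAGE
	 * (0xff), so any later load/store into the 16384 freed bytes is
	 * reported as a use-after-free; kasan_alloc_pages() clears them back
	 * to 0 on the next allocation.
	 */
	kasan_free_pages(page, 2);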

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 01/12] Add kernel address sanitizer infrastructure.
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:40       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:40 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, LKML, Jonathan Corbet,
	Michal Marek, Ingo Molnar, Peter Zijlstra

I'm a little concerned with how enabling/disabling works. If an
enable() is forgotten once, it's disabled forever. If disable() is
forgotten once, the toggle is reversed for the foreseeable future.
Maybe check for inequality in kasan_enabled()? Like
current->kasan_depth >= 0 (will need a signed int for the field).
Do you think it's going to decrease performance?
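
A literal sketch of that suggestion (with kasan_depth switched to a signed
int in task_struct; whether this really tolerates unbalanced enable/disable
calls better is exactly the open question):

	static inline bool kasan_enabled(void)
	{
		return current->kasan_depth >= 0;
	}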

LGTM



On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>
> KASAN uses compile-time instrumentation for checking every memory access,
> therefore GCC >= v4.9.2 required.
>
> This patch only adds infrastructure for kernel address sanitizer. It's not
> available for use yet. The idea and some code was borrowed from [1].
>
> Basic idea:
> The main idea of KASAN is to use shadow memory to record whether each byte of memory
> is safe to access or not, and use compiler's instrumentation to check the shadow memory
> on each memory access.
>
> Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
> and uses direct mapping with a scale and offset to translate a memory
> address to its corresponding shadow address.
>
> Here is function to translate address to corresponding shadow address:
>
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
>      }
> where KASAN_SHADOW_SCALE_SHIFT = 3.
>
> So for every 8 bytes there is one corresponding byte of shadow memory.
> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
> corresponding memory region are valid for access; k (1 <= k <= 7) means that
> the first k bytes are valid for access, and other (8 - k) bytes are not;
> Any negative value indicates that the entire 8-bytes are inaccessible.
> Different negative values used to distinguish between different kinds of
> inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>
> To be able to detect accesses to bad memory we need a special compiler.
> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
> before each memory access of size 1, 2, 4, 8 or 16.
>
> These functions check whether the memory region is valid to access or not by checking
> the corresponding shadow memory. If the access is not valid, an error is printed.
>
> Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
>         "We've developed the set of tools, AddressSanitizer (Asan),
>         ThreadSanitizer and MemorySanitizer, for user space. We actively use
>         them for testing inside of Google (continuous testing, fuzzing,
>         running prod services). To date the tools have found more than 10'000
>         scary bugs in Chromium, Google internal codebase and various
>         open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
>         lots of others): [2] [3] [4].
>         The tools are part of both gcc and clang compilers.
>
>         We have not yet done massive testing under the Kernel AddressSanitizer
>         (it's kind of chicken and egg problem, you need it to be upstream to
>         start applying it extensively). To date it has found about 50 bugs.
>         Bugs that we've found in upstream kernel are listed in [5].
>         We've also found ~20 bugs in our internal version of the kernel. Also
>         people from Samsung and Oracle have found some.
>
>         [...]
>
>         As others noted, the main feature of AddressSanitizer is its
>         performance due to inline compiler instrumentation and simple linear
>         shadow memory. User-space Asan has ~2x slowdown on computational
>         programs and ~2x memory consumption increase. Taking into account that
>         kernel usually consumes only small fraction of CPU and memory when
>         running real user-space programs, I would expect that kernel Asan will
>         have ~10-30% slowdown and similar memory consumption increase (when we
>         finish all tuning).
>
>         I agree that Asan can well replace kmemcheck. We have plans to start
>         working on Kernel MemorySanitizer that finds uses of uninitialized
>         memory. Asan+Msan will provide feature-parity with kmemcheck. As
>         others noted, Asan will unlikely replace debug slab and pagealloc that
>         can be enabled at runtime. Asan uses compiler instrumentation, so even
>         if it is disabled, it still incurs visible overheads.
>
>         Asan technology is easily portable to other architectures. Compiler
>         instrumentation is fully portable. Runtime has some arch-dependent
>         parts like shadow mapping and atomic operation interception. They are
>         relatively easy to port."
>
> Comparison with other debugging features:
> ========================================
>
> KMEMCHECK:
>         - KASan can do almost everything that kmemcheck can. KASan uses compile-time
>           instrumentation, which makes it significantly faster than kmemcheck.
>           The only advantage of kmemcheck over KASan is detection of uninitialized
>           memory reads.
>
>           Some brief performance testing showed that kasan could be x500-x600 times
>           faster than kmemcheck:
>
> $ netperf -l 30
>                 MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
>                 Recv   Send    Send
>                 Socket Socket  Message  Elapsed
>                 Size   Size    Size     Time     Throughput
>                 bytes  bytes   bytes    secs.    10^6bits/sec
>
> no debug:       87380  16384  16384    30.00    41624.72
>
> kasan inline:   87380  16384  16384    30.00    12870.54
>
> kasan outline:  87380  16384  16384    30.00    10586.39
>
> kmemcheck:      87380  16384  16384    30.03      20.23
>
>         - Also kmemcheck couldn't work on several CPUs. It always sets number of CPUs to 1.
>           KASan doesn't have such limitation.
>
> DEBUG_PAGEALLOC:
>         - KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
>           granularity level, so it is able to find more bugs.
>
> SLUB_DEBUG (poisoning, redzones):
>         - SLUB_DEBUG has lower overhead than KASan.
>
>         - SLUB_DEBUG in most cases is not able to detect bad reads,
>           KASan is able to detect both reads and writes.
>
>         - In some cases (e.g. redzone overwritten) SLUB_DEBUG detects
>           bugs only on allocation/freeing of an object. KASan catches
>           bugs right before they happen, so we always know the exact
>           place of the first bad read/write.
>
> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
> [2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
> [3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
> [4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
> [5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
>
> Based on work by Andrey Konovalov <adech.fo@gmail.com>
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Documentation/kasan.txt               | 169 +++++++++++++++
>  Makefile                              |  23 ++-
>  drivers/firmware/efi/libstub/Makefile |   1 +
>  include/linux/kasan.h                 |  42 ++++
>  include/linux/sched.h                 |   3 +
>  lib/Kconfig.debug                     |   2 +
>  lib/Kconfig.kasan                     |  43 ++++
>  mm/Makefile                           |   1 +
>  mm/kasan/Makefile                     |   7 +
>  mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h                      |  49 +++++
>  mm/kasan/report.c                     | 205 +++++++++++++++++++
>  scripts/Makefile.lib                  |  10 +
>  13 files changed, 927 insertions(+), 2 deletions(-)
>  create mode 100644 Documentation/kasan.txt
>  create mode 100644 include/linux/kasan.h
>  create mode 100644 lib/Kconfig.kasan
>  create mode 100644 mm/kasan/Makefile
>  create mode 100644 mm/kasan/kasan.c
>  create mode 100644 mm/kasan/kasan.h
>  create mode 100644 mm/kasan/report.c
>
> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
> new file mode 100644
> index 0000000..a3a9009
> --- /dev/null
> +++ b/Documentation/kasan.txt
> @@ -0,0 +1,169 @@
> +Kernel address sanitizer
> +================
> +
> +0. Overview
> +===========
> +
> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> +a fast and comprehensive solution for finding use-after-free and out-of-bounds
> +bugs.
> +
> +KASan uses compile-time instrumentation for checking every memory access,
> +therefore you will need GCC version 4.9.2 or later.
> +
> +Currently KASan is supported only for x86_64 architecture and requires that the
> +kernel be built with the SLUB allocator.
> +
> +1. Usage
> +=========
> +
> +To enable KASAN configure kernel with:
> +
> +         CONFIG_KASAN = y
> +
> +and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
> +inline are compiler instrumentation types. The former produces a smaller binary
> +while the latter is 1.1 - 2 times faster. Inline instrumentation requires GCC
> +5.0 or later.
> +
> +Currently KASAN works only with the SLUB memory allocator.
> +For better bug detection and a nicer report, enable CONFIG_STACKTRACE and put
> +at least 'slub_debug=U' in the boot cmdline.
> +
> +To disable instrumentation for specific files or directories, add a line
> +similar to the following to the respective kernel Makefile:
> +
> +        For a single file (e.g. main.o):
> +                KASAN_SANITIZE_main.o := n
> +
> +        For all files in one directory:
> +                KASAN_SANITIZE := n
> +
> +1.1 Error reports
> +==========
> +
> +A typical out of bounds access report looks like this:
> +
> +==================================================================
> +BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
> +Write of size 1 by task modprobe/1689
> +=============================================================================
> +BUG kmalloc-128 (Not tainted): kasan error
> +-----------------------------------------------------------------------------
> +
> +Disabling lock debugging due to kernel taint
> +INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
> + __slab_alloc+0x4b4/0x4f0
> + kmem_cache_alloc_trace+0x10b/0x190
> + kmalloc_oob_right+0x3d/0x75 [test_kasan]
> + init_module+0x9/0x47 [test_kasan]
> + do_one_initcall+0x99/0x200
> + load_module+0x2cb3/0x3b20
> + SyS_finit_module+0x76/0x80
> + system_call_fastpath+0x12/0x17
> +INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
> +INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
> +
> +Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
> +Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
> +Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
> +Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
> +CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
> +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
> + ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
> + ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
> + ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
> +Call Trace:
> + [<ffffffff81cc68ae>] dump_stack+0x46/0x58
> + [<ffffffff811fd848>] print_trailer+0xf8/0x160
> + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
> + [<ffffffff811ff0f5>] object_err+0x35/0x40
> + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
> + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
> + [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
> + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
> + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
> + [<ffffffff8120a995>] __asan_store1+0x75/0xb0
> + [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
> + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
> + [<ffffffff810002d9>] do_one_initcall+0x99/0x200
> + [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
> + [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
> + [<ffffffff8110fd70>] ? m_show+0x240/0x240
> + [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
> + [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
> +Memory state around the buggy address:
> + ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
> + ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
> +>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
> +                                                 ^
> + ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
> + ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> + ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> +==================================================================
> +
> +The first sections describe the slub object where the bad access happened.
> +See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
> +
> +In the last section the report shows memory state around the accessed address.
> +Reading this part requires some more understanding of how KASAN works.
> +
> +Each 8 bytes of memory are encoded in one shadow byte as accessible,
> +partially accessible, freed or they can be part of a redzone.
> +We use the following encoding for each shadow byte: 0 means that all 8 bytes
> +of the corresponding memory region are accessible; number N (1 <= N <= 7) means
> +that the first N bytes are accessible, and other (8 - N) bytes are not;
> +any negative value indicates that the entire 8-byte word is inaccessible.
> +We use different negative values to distinguish between different kinds of
> +inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
> +
> +In the report above the arrows point to the shadow byte 03, which means that
> +the accessed address is partially accessible.
> +
> +
> +2. Implementation details
> +========================
> +
> +From a high level, our approach to memory error detection is similar to that
> +of kmemcheck: use shadow memory to record whether each byte of memory is safe
> +to access, and use compile-time instrumentation to check shadow memory on each
> +memory access.
> +
> +AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
> +(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
> +offset to translate a memory address to its corresponding shadow address.
> +
> +Here is the function which translates an address to its corresponding shadow
> +address:
> +
> +unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +       return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +
> +where KASAN_SHADOW_SCALE_SHIFT = 3.
> +
> +Compile-time instrumentation is used for checking memory accesses. The compiler
> +inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
> +memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory
> +access is valid or not by checking the corresponding shadow memory.
> +
> +GCC 5.0 can perform inline instrumentation. Instead of making function calls,
> +GCC directly inserts the code to check the shadow memory. This option
> +significantly enlarges the kernel, but it gives an x1.1-x2 performance boost
> +over an outline-instrumented kernel.
> diff --git a/Makefile b/Makefile
> index 92edae4..052c1f4 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
>  CFLAGS_KERNEL  =
>  AFLAGS_KERNEL  =
>  CFLAGS_GCOV    = -fprofile-arcs -ftest-coverage
> -
> +CFLAGS_KASAN   = $(call cc-option, -fsanitize=kernel-address)
>
>  # Use USERINCLUDE when you must reference the UAPI directories only.
>  USERINCLUDE    := \
> @@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
>  export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
>
>  export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
> -export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
> +export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
>  export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
>  export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
>  export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
> @@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
>  KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
> +ifdef CONFIG_KASAN
> +ifdef CONFIG_KASAN_INLINE
> +  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
> +                       -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
> +                       --param asan-instrumentation-with-call-threshold=10000)
> +  ifeq ($(kasan_inline),)
> +    $(warning Cannot use CONFIG_KASAN_INLINE: \
> +             inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
> +  else
> +    CFLAGS_KASAN := $(kasan_inline)
> +  endif
> +
> +endif
> +  ifeq ($(CFLAGS_KASAN),)
> +    $(warning Cannot use CONFIG_KASAN: \
> +             -fsanitize=kernel-address is not supported by compiler)
> +  endif
> +endif
> +
>  # arch Makefile may override CC so keep this after arch Makefile is included
>  NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
>  CHECKFLAGS     += $(NOSTDINC_FLAGS)
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index b14bc2b..c5533c7 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -19,6 +19,7 @@ KBUILD_CFLAGS                 := $(cflags-y) \
>                                    $(call cc-option,-fno-stack-protector)
>
>  GCOV_PROFILE                   := n
> +KASAN_SANITIZE                 := n
>
>  lib-y                          := efi-stub-helper.o
>  lib-$(CONFIG_EFI_ARMSTUB)      += arm-stub.o fdt.o
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> new file mode 100644
> index 0000000..01c99fe
> --- /dev/null
> +++ b/include/linux/kasan.h
> @@ -0,0 +1,42 @@
> +#ifndef _LINUX_KASAN_H
> +#define _LINUX_KASAN_H
> +
> +#include <linux/types.h>
> +
> +struct kmem_cache;
> +struct page;
> +
> +#ifdef CONFIG_KASAN
> +#include <asm/kasan.h>
> +#include <linux/sched.h>
> +
> +#define KASAN_SHADOW_SCALE_SHIFT 3
> +#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +
> +static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +       return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +
> +static inline void kasan_enable_local(void)
> +{
> +       current->kasan_depth++;
> +}
> +
> +static inline void kasan_disable_local(void)
> +{
> +       current->kasan_depth--;
> +}
> +
> +void kasan_unpoison_shadow(const void *address, size_t size);
> +
> +#else /* CONFIG_KASAN */
> +
> +static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
> +
> +static inline void kasan_enable_local(void) {}
> +static inline void kasan_disable_local(void) {}
> +
> +#endif /* CONFIG_KASAN */
> +
> +#endif /* LINUX_KASAN_H */
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 8db31ef..26e1b47 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1662,6 +1662,9 @@ struct task_struct {
>         unsigned long timer_slack_ns;
>         unsigned long default_timer_slack_ns;
>
> +#ifdef CONFIG_KASAN
> +       unsigned int kasan_depth;
> +#endif
>  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>         /* Index of current stored address in ret_stack */
>         int curr_ret_stack;
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index ddd070a..bb26ec3 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
>
>  source "lib/Kconfig.kmemcheck"
>
> +source "lib/Kconfig.kasan"
> +
>  endmenu # "Memory Debugging"
>
>  config DEBUG_SHIRQ
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> new file mode 100644
> index 0000000..10341df
> --- /dev/null
> +++ b/lib/Kconfig.kasan
> @@ -0,0 +1,43 @@
> +config HAVE_ARCH_KASAN
> +       bool
> +
> +if HAVE_ARCH_KASAN
> +
> +config KASAN
> +       bool "AddressSanitizer: runtime memory debugger"
> +       help
> +         Enables the address sanitizer - a runtime memory debugger
> +         designed to find out-of-bounds accesses and use-after-free bugs.
> +         This is strictly a debugging feature. It consumes about 1/8
> +         of available memory and brings a ~x3 performance slowdown.
> +         For better error detection enable CONFIG_STACKTRACE,
> +         and add slub_debug=U to the boot cmdline.
> +
> +config KASAN_SHADOW_OFFSET
> +       hex
> +
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_OUTLINE
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access the compiler inserts a function call
> +         (__asan_load*/__asan_store*). These functions perform a check
> +         of the shadow memory. This is slower than inline instrumentation,
> +         however it doesn't bloat the size of the kernel's .text section
> +         as much as inline does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         The compiler directly inserts code checking shadow memory before
> +         memory accesses. This is faster than outline (in some workloads
> +         it gives about an x2 boost over outline instrumentation), but
> +         makes the kernel's .text size much bigger.
> +
> +endchoice
> +
> +endif
> diff --git a/mm/Makefile b/mm/Makefile
> index d9d5794..33d9971 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -72,3 +72,4 @@ obj-$(CONFIG_ZSMALLOC)        += zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
>  obj-$(CONFIG_CMA)      += cma.o
>  obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
> +obj-$(CONFIG_KASAN)    += kasan/
> diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
> new file mode 100644
> index 0000000..ef2d313
> --- /dev/null
> +++ b/mm/kasan/Makefile
> @@ -0,0 +1,7 @@
> +KASAN_SANITIZE := n
> +
> +# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
> +# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
> +CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
> +
> +obj-y := kasan.o report.o
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..f77be01
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,374 @@
> +/*
> + * This file contains shadow memory manipulation code.
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * Some of code borrowed from https://github.com/xairy/linux by
> + *        Andrey Konovalov <adech.fo@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/export.h>
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/mm.h>
> +#include <linux/printk.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/stacktrace.h>
> +#include <linux/string.h>
> +#include <linux/types.h>
> +#include <linux/kasan.h>
> +
> +#include "kasan.h"
> +
> +/*
> + * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> + * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
> + */
> +static void kasan_poison_shadow(const void *address, size_t size, u8 value)
> +{
> +       unsigned long shadow_start, shadow_end;
> +       unsigned long addr = (unsigned long)address;
> +
> +       shadow_start = kasan_mem_to_shadow(addr);
> +       shadow_end = kasan_mem_to_shadow(addr + size);
> +
> +       memset((void *)shadow_start, value, shadow_end - shadow_start);
> +}
> +
> +void kasan_unpoison_shadow(const void *address, size_t size)
> +{
> +       kasan_poison_shadow(address, size, 0);
> +
> +       if (size & KASAN_SHADOW_MASK) {
> +               u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
> +                                               + size);
> +               *shadow = size & KASAN_SHADOW_MASK;
> +       }
> +}
> +
> +static __always_inline bool memory_is_poisoned_1(unsigned long addr)
> +{
> +       s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(shadow_value)) {
> +               s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
> +               return unlikely(last_accessible_byte >= shadow_value);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_2(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 1))
> +                       return true;
> +
> +               if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_4(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 3))
> +                       return true;
> +
> +               if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_8(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 7))
> +                       return true;
> +
> +               if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> +{
> +       u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               u16 shadow_first_bytes = *(u16 *)shadow_addr;
> +               s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
> +
> +               if (unlikely(shadow_first_bytes))
> +                       return true;
> +
> +               if (likely(!last_byte))
> +                       return false;
> +
> +               return memory_is_poisoned_1(addr + 15);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline unsigned long bytes_is_zero(unsigned long start,
> +                                       size_t size)
> +{
> +       while (size) {
> +               if (unlikely(*(u8 *)start))
> +                       return start;
> +               start++;
> +               size--;
> +       }
> +
> +       return 0;
> +}
> +
> +static __always_inline unsigned long memory_is_zero(unsigned long start,
> +                                               unsigned long end)
> +{
> +       unsigned int prefix = start % 8;
> +       unsigned int words;
> +       unsigned long ret;
> +
> +       if (end - start <= 16)
> +               return bytes_is_zero(start, end - start);
> +
> +       if (prefix) {
> +               prefix = 8 - prefix;
> +               ret = bytes_is_zero(start, prefix);
> +               if (unlikely(ret))
> +                       return ret;
> +               start += prefix;
> +       }
> +
> +       words = (end - start) / 8;
> +       while (words) {
> +               if (unlikely(*(u64 *)start))
> +                       return bytes_is_zero(start, 8);
> +               start += 8;
> +               words--;
> +       }
> +
> +       return bytes_is_zero(start, (end - start) % 8);
> +}
> +
> +static __always_inline bool memory_is_poisoned_n(unsigned long addr,
> +                                               size_t size)
> +{
> +       unsigned long ret;
> +
> +       ret = memory_is_zero(kasan_mem_to_shadow(addr),
> +                       kasan_mem_to_shadow(addr + size - 1) + 1);
> +
> +       if (unlikely(ret)) {
> +               unsigned long last_byte = addr + size - 1;
> +               s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
> +
> +               if (unlikely(ret != (unsigned long)last_shadow ||
> +                       ((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
> +                       return true;
> +       }
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
> +{
> +       if (__builtin_constant_p(size)) {
> +               switch (size) {
> +               case 1:
> +                       return memory_is_poisoned_1(addr);
> +               case 2:
> +                       return memory_is_poisoned_2(addr);
> +               case 4:
> +                       return memory_is_poisoned_4(addr);
> +               case 8:
> +                       return memory_is_poisoned_8(addr);
> +               case 16:
> +                       return memory_is_poisoned_16(addr);
> +               default:
> +                       BUILD_BUG();
> +               }
> +       }
> +
> +       return memory_is_poisoned_n(addr, size);
> +}
> +
> +
> +static __always_inline void check_memory_region(unsigned long addr,
> +                                               size_t size, bool write)
> +{
> +       struct access_info info;
> +
> +       if (unlikely(size == 0))
> +               return;
> +
> +       if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
> +               info.access_addr = addr;
> +               info.access_size = size;
> +               info.is_write = write;
> +               info.ip = _RET_IP_;
> +               kasan_report_user_access(&info);
> +               return;
> +       }
> +
> +       if (likely(!memory_is_poisoned(addr, size)))
> +               return;
> +
> +       kasan_report(addr, size, write);
> +}
> +
> +void __asan_load1(unsigned long addr)
> +{
> +       check_memory_region(addr, 1, false);
> +}
> +EXPORT_SYMBOL(__asan_load1);
> +
> +void __asan_load2(unsigned long addr)
> +{
> +       check_memory_region(addr, 2, false);
> +}
> +EXPORT_SYMBOL(__asan_load2);
> +
> +void __asan_load4(unsigned long addr)
> +{
> +       check_memory_region(addr, 4, false);
> +}
> +EXPORT_SYMBOL(__asan_load4);
> +
> +void __asan_load8(unsigned long addr)
> +{
> +       check_memory_region(addr, 8, false);
> +}
> +EXPORT_SYMBOL(__asan_load8);
> +
> +void __asan_load16(unsigned long addr)
> +{
> +       check_memory_region(addr, 16, false);
> +}
> +EXPORT_SYMBOL(__asan_load16);
> +
> +void __asan_loadN(unsigned long addr, size_t size)
> +{
> +       check_memory_region(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_loadN);
> +
> +void __asan_store1(unsigned long addr)
> +{
> +       check_memory_region(addr, 1, true);
> +}
> +EXPORT_SYMBOL(__asan_store1);
> +
> +void __asan_store2(unsigned long addr)
> +{
> +       check_memory_region(addr, 2, true);
> +}
> +EXPORT_SYMBOL(__asan_store2);
> +
> +void __asan_store4(unsigned long addr)
> +{
> +       check_memory_region(addr, 4, true);
> +}
> +EXPORT_SYMBOL(__asan_store4);
> +
> +void __asan_store8(unsigned long addr)
> +{
> +       check_memory_region(addr, 8, true);
> +}
> +EXPORT_SYMBOL(__asan_store8);
> +
> +void __asan_store16(unsigned long addr)
> +{
> +       check_memory_region(addr, 16, true);
> +}
> +EXPORT_SYMBOL(__asan_store16);
> +
> +void __asan_storeN(unsigned long addr, size_t size)
> +{
> +       check_memory_region(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_storeN);
> +
> +/* to shut up compiler complaints */
> +void __asan_handle_no_return(void) {}
> +EXPORT_SYMBOL(__asan_handle_no_return);
> +
> +
> +/* GCC 5.0 has different function names by default */
> +__attribute__((alias("__asan_load1")))
> +void __asan_load1_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load1_noabort);
> +
> +__attribute__((alias("__asan_load2")))
> +void __asan_load2_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load2_noabort);
> +
> +__attribute__((alias("__asan_load4")))
> +void __asan_load4_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load4_noabort);
> +
> +__attribute__((alias("__asan_load8")))
> +void __asan_load8_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load8_noabort);
> +
> +__attribute__((alias("__asan_load16")))
> +void __asan_load16_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load16_noabort);
> +
> +__attribute__((alias("__asan_loadN")))
> +void __asan_loadN_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_loadN_noabort);
> +
> +__attribute__((alias("__asan_store1")))
> +void __asan_store1_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store1_noabort);
> +
> +__attribute__((alias("__asan_store2")))
> +void __asan_store2_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store2_noabort);
> +
> +__attribute__((alias("__asan_store4")))
> +void __asan_store4_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store4_noabort);
> +
> +__attribute__((alias("__asan_store8")))
> +void __asan_store8_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store8_noabort);
> +
> +__attribute__((alias("__asan_store16")))
> +void __asan_store16_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store16_noabort);
> +
> +__attribute__((alias("__asan_storeN")))
> +void __asan_storeN_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_storeN_noabort);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> new file mode 100644
> index 0000000..6da1d78
> --- /dev/null
> +++ b/mm/kasan/kasan.h
> @@ -0,0 +1,49 @@
> +#ifndef __MM_KASAN_KASAN_H
> +#define __MM_KASAN_KASAN_H
> +
> +#include <linux/kasan.h>
> +
> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
> +
> +#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
> +
> +struct access_info {
> +       unsigned long access_addr;
> +       unsigned long first_bad_addr;
> +       size_t access_size;
> +       bool is_write;
> +       unsigned long ip;
> +};
> +
> +void kasan_report_error(struct access_info *info);
> +void kasan_report_user_access(struct access_info *info);
> +
> +static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
> +{
> +       return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
> +}
> +
> +static inline bool kasan_enabled(void)
> +{
> +       return !current->kasan_depth;
> +}
> +
> +static __always_inline void kasan_report(unsigned long addr,
> +                                       size_t size,
> +                                       bool is_write)
> +{
> +       struct access_info info;
> +
> +       if (likely(!kasan_enabled()))
> +               return;
> +
> +       info.access_addr = addr;
> +       info.access_size = size;
> +       info.is_write = is_write;
> +       info.ip = _RET_IP_;
> +       kasan_report_error(&info);
> +}
> +
> +
> +#endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> new file mode 100644
> index 0000000..56a2089
> --- /dev/null
> +++ b/mm/kasan/report.c
> @@ -0,0 +1,205 @@
> +/*
> + * This file contains error reporting code.
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * Some of code borrowed from https://github.com/xairy/linux by
> + *        Andrey Konovalov <adech.fo@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/printk.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/stacktrace.h>
> +#include <linux/string.h>
> +#include <linux/types.h>
> +#include <linux/kasan.h>
> +
> +#include "kasan.h"
> +
> +/* Shadow layout customization. */
> +#define SHADOW_BYTES_PER_BLOCK 1
> +#define SHADOW_BLOCKS_PER_ROW 16
> +#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
> +#define SHADOW_ROWS_AROUND_ADDR 5
> +
> +static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
> +{
> +       u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
> +       unsigned long first_bad_addr = addr;
> +
> +       while (!shadow_val && first_bad_addr < addr + size) {
> +               first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
> +               shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
> +       }
> +       return first_bad_addr;
> +}
> +
> +static void print_error_description(struct access_info *info)
> +{
> +       const char *bug_type = "unknown crash";
> +       u8 shadow_val;
> +
> +       info->first_bad_addr = find_first_bad_addr(info->access_addr,
> +                                               info->access_size);
> +
> +       shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
> +
> +       switch (shadow_val) {
> +       case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +               bug_type = "out of bounds access";
> +               break;
> +       case KASAN_SHADOW_GAP:
> +               bug_type = "wild memory access";
> +               break;
> +       }
> +
> +       pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
> +               bug_type, (void *)info->ip,
> +               (void *)info->access_addr);
> +       pr_err("%s of size %zu by task %s/%d\n",
> +               info->is_write ? "Write" : "Read",
> +               info->access_size, current->comm, task_pid_nr(current));
> +}
> +
> +static void print_address_description(struct access_info *info)
> +{
> +       struct page *page;
> +       u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
> +
> +       page = virt_to_head_page((void *)info->access_addr);
> +
> +       switch (shadow_val) {
> +       case KASAN_SHADOW_GAP:
> +               pr_err("No metainfo is available for this access.\n");
> +               dump_stack();
> +               break;
> +       default:
> +               WARN_ON(1);
> +       }
> +}
> +
> +static bool row_is_guilty(unsigned long row, unsigned long guilty)
> +{
> +       return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
> +}
> +
> +static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
> +{
> +       /* The length of ">ff00ff00ff00ff00: " is
> +        *    3 + (BITS_PER_LONG/8)*2 chars.
> +        */
> +       return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
> +               (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
> +}
> +
> +static void print_shadow_for_address(unsigned long addr)
> +{
> +       int i;
> +       unsigned long shadow = kasan_mem_to_shadow(addr);
> +       unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
> +               - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
> +
> +       pr_err("Memory state around the buggy address:\n");
> +
> +       for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
> +               unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
> +               char buffer[4 + (BITS_PER_LONG/8)*2];
> +
> +               snprintf(buffer, sizeof(buffer),
> +                       (i == 0) ? ">%lx: " : " %lx: ", kaddr);
> +
> +               kasan_disable_local();
> +               print_hex_dump(KERN_ERR, buffer,
> +                       DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
> +                       (void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
> +               kasan_enable_local();
> +
> +               if (row_is_guilty(aligned_shadow, shadow))
> +                       pr_err("%*c\n",
> +                               shadow_pointer_offset(aligned_shadow, shadow),
> +                               '^');
> +
> +               aligned_shadow += SHADOW_BYTES_PER_ROW;
> +       }
> +}
> +
> +static DEFINE_SPINLOCK(report_lock);
> +
> +void kasan_report_error(struct access_info *info)
> +{
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&report_lock, flags);
> +       pr_err("================================="
> +               "=================================\n");
> +       print_error_description(info);
> +       print_address_description(info);
> +       print_shadow_for_address(info->first_bad_addr);
> +       pr_err("================================="
> +               "=================================\n");
> +       spin_unlock_irqrestore(&report_lock, flags);
> +}
> +
> +void kasan_report_user_access(struct access_info *info)
> +{
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&report_lock, flags);
> +       pr_err("================================="
> +               "=================================\n");
> +       pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
> +               info->access_addr);
> +       pr_err("%s of size %zu by thread T%d:\n",
> +               info->is_write ? "Write" : "Read",
> +               info->access_size, current->pid);
> +       dump_stack();
> +       pr_err("================================="
> +               "=================================\n");
> +       spin_unlock_irqrestore(&report_lock, flags);
> +}
> +
> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_load##size##_noabort(unsigned long addr) \
> +{                                                         \
> +       kasan_report(addr, size, false);                  \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_load##size##_noabort)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_store##size##_noabort(unsigned long addr) \
> +{                                                          \
> +       kasan_report(addr, size, true);                    \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_store##size##_noabort)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_load_n_noabort(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_load_n_noabort);
> +
> +void __asan_report_store_n_noabort(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_store_n_noabort);
> diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
> index 5117552..a5845a2 100644
> --- a/scripts/Makefile.lib
> +++ b/scripts/Makefile.lib
> @@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
>                 $(CFLAGS_GCOV))
>  endif
>
> +#
> +# Enable address sanitizer flags for kernel except some files or directories
> +# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
> +#
> +ifeq ($(CONFIG_KASAN),y)
> +_c_flags += $(if $(patsubst n%,, \
> +               $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
> +               $(CFLAGS_KASAN))
> +endif
> +
>  # If building the kernel in a separate objtree expand all occurrences
>  # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
>
> --
> 2.1.3
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v7 01/12] Add kernel address sanitizer infrastructure.
@ 2014-11-25 12:40       ` Dmitry Chernenkov
  0 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:40 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, LKML, Jonathan Corbet,
	Michal Marek, Ingo Molnar, Peter Zijlstra

I'm a little concerned with how enabling/disabling works. If an
enable() is forgotten once, it's disabled forever. If disable() is
forgotten once, the toggle is reversed for the foreseeable future. Maybe
check for inequality in kasan_enabled()? Like current->kasan_depth >=
0 (this will need a signed int for the field). Do you think that would
decrease performance?

LGTM



On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>
> KASAN uses compile-time instrumentation for checking every memory access,
> therefore GCC >= 4.9.2 is required.
>
> This patch only adds the infrastructure for the kernel address sanitizer. It's not
> available for use yet. The idea and some code were borrowed from [1].
>
> Basic idea:
> The main idea of KASAN is to use shadow memory to record whether each byte of memory
> is safe to access or not, and use compiler's instrumentation to check the shadow memory
> on each memory access.
>
> Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
> and uses direct mapping with a scale and offset to translate a memory
> address to its corresponding shadow address.
>
> Here is the function that translates an address to its corresponding shadow address:
>
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
>      }
> where KASAN_SHADOW_SCALE_SHIFT = 3.
>
> So for every 8 bytes there is one corresponding byte of shadow memory.
> The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
> corresponding memory region are valid for access; k (1 <= k <= 7) means that
> the first k bytes are valid for access, and the other (8 - k) bytes are not;
> any negative value indicates that the entire 8-byte region is inaccessible.
> Different negative values are used to distinguish between different kinds of
> inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>
> To be able to detect accesses to bad memory we need a special compiler.
> Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
> before each memory access of size 1, 2, 4, 8 or 16.
>
> These functions check whether a memory region is valid to access or not by checking
> the corresponding shadow memory. If the access is not valid, an error is printed.
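
(As a concrete illustration of the encoding and the check described above -- a
hypothetical sketch, not part of the patch: a 13-byte object starting at an
8-byte-aligned address covers two shadow bytes, 0 and 5, and a single-byte
access check reduces to one comparison.)

        /* Illustrative only. For a 13-byte object at an 8-byte-aligned 'addr':
         *   shadow byte 0 = 0 -> bytes 0..7 of the object are accessible
         *   shadow byte 1 = 5 -> only the first 5 of bytes 8..15 are accessible
         * A 1-byte access at 'addr' is valid iff its offset within the 8-byte
         * granule is below the shadow value, or the shadow value is 0.
         */
        static inline bool byte_is_accessible(unsigned long addr)
        {
                s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                return shadow == 0 || (s8)(addr & 7) < shadow;
        }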
>
> Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
>         "We've developed the set of tools, AddressSanitizer (Asan),
>         ThreadSanitizer and MemorySanitizer, for user space. We actively use
>         them for testing inside of Google (continuous testing, fuzzing,
>         running prod services). To date the tools have found more than 10'000
>         scary bugs in Chromium, Google internal codebase and various
>         open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
>         lots of others): [2] [3] [4].
>         The tools are part of both gcc and clang compilers.
>
>         We have not yet done massive testing under the Kernel AddressSanitizer
>         (it's kind of chicken and egg problem, you need it to be upstream to
>         start applying it extensively). To date it has found about 50 bugs.
>         Bugs that we've found in upstream kernel are listed in [5].
>         We've also found ~20 bugs in our internal version of the kernel. Also
>         people from Samsung and Oracle have found some.
>
>         [...]
>
>         As others noted, the main feature of AddressSanitizer is its
>         performance due to inline compiler instrumentation and simple linear
>         shadow memory. User-space Asan has ~2x slowdown on computational
>         programs and ~2x memory consumption increase. Taking into account that
>         kernel usually consumes only small fraction of CPU and memory when
>         running real user-space programs, I would expect that kernel Asan will
>         have ~10-30% slowdown and similar memory consumption increase (when we
>         finish all tuning).
>
>         I agree that Asan can well replace kmemcheck. We have plans to start
>         working on Kernel MemorySanitizer that finds uses of uninitialized
>         memory. Asan+Msan will provide feature-parity with kmemcheck. As
>         others noted, Asan will unlikely replace debug slab and pagealloc that
>         can be enabled at runtime. Asan uses compiler instrumentation, so even
>         if it is disabled, it still incurs visible overheads.
>
>         Asan technology is easily portable to other architectures. Compiler
>         instrumentation is fully portable. Runtime has some arch-dependent
>         parts like shadow mapping and atomic operation interception. They are
>         relatively easy to port."
>
> Comparison with other debugging features:
> ========================================
>
> KMEMCHECK:
>         - KASan can do almost everything that kmemcheck can. KASan uses compile-time
>           instrumentation, which makes it significantly faster than kmemcheck.
>           The only advantage of kmemcheck over KASan is detection of uninitialized
>           memory reads.
>
>           Some brief performance testing showed that kasan could be 500-600 times
>           faster than kmemcheck:
>
> $ netperf -l 30
>                 MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
>                 Recv   Send    Send
>                 Socket Socket  Message  Elapsed
>                 Size   Size    Size     Time     Throughput
>                 bytes  bytes   bytes    secs.    10^6bits/sec
>
> no debug:       87380  16384  16384    30.00    41624.72
>
> kasan inline:   87380  16384  16384    30.00    12870.54
>
> kasan outline:  87380  16384  16384    30.00    10586.39
>
> kmemcheck:      87380  16384  16384    30.03      20.23
>
>         - Also, kmemcheck can't run on several CPUs; it always sets the number of CPUs to 1.
>           KASan doesn't have such a limitation.
>
> DEBUG_PAGEALLOC:
>         - KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
>           granularity, so it is able to find more bugs.
>
> SLUB_DEBUG (poisoning, redzones):
>         - SLUB_DEBUG has lower overhead than KASan.
>
>         - SLUB_DEBUG is in most cases not able to detect bad reads,
>           while KASan is able to detect both bad reads and writes.
>
>         - In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
>           bugs only on allocation/freeing of the object. KASan catches
>           a bug right before it happens, so we always know the exact
>           place of the first bad read/write.
>
> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
> [2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
> [3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
> [4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
> [5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
>
> Based on work by Andrey Konovalov <adech.fo@gmail.com>
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  Documentation/kasan.txt               | 169 +++++++++++++++
>  Makefile                              |  23 ++-
>  drivers/firmware/efi/libstub/Makefile |   1 +
>  include/linux/kasan.h                 |  42 ++++
>  include/linux/sched.h                 |   3 +
>  lib/Kconfig.debug                     |   2 +
>  lib/Kconfig.kasan                     |  43 ++++
>  mm/Makefile                           |   1 +
>  mm/kasan/Makefile                     |   7 +
>  mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h                      |  49 +++++
>  mm/kasan/report.c                     | 205 +++++++++++++++++++
>  scripts/Makefile.lib                  |  10 +
>  13 files changed, 927 insertions(+), 2 deletions(-)
>  create mode 100644 Documentation/kasan.txt
>  create mode 100644 include/linux/kasan.h
>  create mode 100644 lib/Kconfig.kasan
>  create mode 100644 mm/kasan/Makefile
>  create mode 100644 mm/kasan/kasan.c
>  create mode 100644 mm/kasan/kasan.h
>  create mode 100644 mm/kasan/report.c
>
> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
> new file mode 100644
> index 0000000..a3a9009
> --- /dev/null
> +++ b/Documentation/kasan.txt
> @@ -0,0 +1,169 @@
> +Kernel address sanitizer
> +========================
> +
> +0. Overview
> +===========
> +
> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> +a fast and comprehensive solution for finding use-after-free and out-of-bounds
> +bugs.
> +
> +KASan uses compile-time instrumentation for checking every memory access,
> +therefore you will need GCC version 4.9.2 or newer.
> +
> +Currently KASan is supported only for x86_64 architecture and requires that the
> +kernel be built with the SLUB allocator.
> +
> +1. Usage
> +=========
> +
> +To enable KASAN configure kernel with:
> +
> +         CONFIG_KASAN = y
> +
> +and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. These are
> +compiler instrumentation types. The former produces a smaller binary while the
> +latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
> +later.
> +
> +Currently KASAN works only with the SLUB memory allocator.
> +For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
> +at least 'slub_debug=U' in the boot cmdline.
> +
> +To disable instrumentation for specific files or directories, add a line
> +similar to the following to the respective kernel Makefile:
> +
> +        For a single file (e.g. main.o):
> +                KASAN_SANITIZE_main.o := n
> +
> +        For all files in one directory:
> +                KASAN_SANITIZE := n
> +
> +1.1 Error reports
> +=================
> +
> +A typical out of bounds access report looks like this:
> +
> +==================================================================
> +BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
> +Write of size 1 by task modprobe/1689
> +=============================================================================
> +BUG kmalloc-128 (Not tainted): kasan error
> +-----------------------------------------------------------------------------
> +
> +Disabling lock debugging due to kernel taint
> +INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
> + __slab_alloc+0x4b4/0x4f0
> + kmem_cache_alloc_trace+0x10b/0x190
> + kmalloc_oob_right+0x3d/0x75 [test_kasan]
> + init_module+0x9/0x47 [test_kasan]
> + do_one_initcall+0x99/0x200
> + load_module+0x2cb3/0x3b20
> + SyS_finit_module+0x76/0x80
> + system_call_fastpath+0x12/0x17
> +INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
> +INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
> +
> +Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
> +Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
> +Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
> +Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
> +Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
> +CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
> +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
> + ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
> + ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
> + ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
> +Call Trace:
> + [<ffffffff81cc68ae>] dump_stack+0x46/0x58
> + [<ffffffff811fd848>] print_trailer+0xf8/0x160
> + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
> + [<ffffffff811ff0f5>] object_err+0x35/0x40
> + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
> + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
> + [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
> + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
> + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
> + [<ffffffff8120a995>] __asan_store1+0x75/0xb0
> + [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
> + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
> + [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
> + [<ffffffff810002d9>] do_one_initcall+0x99/0x200
> + [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
> + [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
> + [<ffffffff8110fd70>] ? m_show+0x240/0x240
> + [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
> + [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
> +Memory state around the buggy address:
> + ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
> + ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
> +>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
> +                                                 ^
> + ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> + ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
> + ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> + ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> +==================================================================
> +
> +The first sections of the report describe the slub object where the bad
> +access happened.
> +See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
> +
> +In the last section the report shows memory state around the accessed address.
> +Reading this part requires some more understanding of how KASAN works.
> +
> +Each 8 bytes of memory are encoded in one shadow byte as accessible,
> +partially accessible, freed or they can be part of a redzone.
> +We use the following encoding for each shadow byte: 0 means that all 8 bytes
> +of the corresponding memory region are accessible; number N (1 <= N <= 7) means
> +that the first N bytes are accessible, and other (8 - N) bytes are not;
> +any negative value indicates that the entire 8-byte word is inaccessible.
> +We use different negative values to distinguish between different kinds of
> +inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
> +
> +In the report above the arrow points to the shadow byte 03, which means that
> +the accessed address is partially accessible.
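
(For illustration, decoding the marked row using only the addresses shown in
the report above: the faulting address 0xffff8800693bc5d3 has offset
0x...c5d3 & 7 = 3 within its 8-byte granule, and the shadow byte 03 says only
offsets 0..2 of that granule are accessible, so the access lands one byte past
the last valid byte and gets reported.)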
> +
> +
> +2. Implementation details
> +========================
> +
> +From a high level, our approach to memory error detection is similar to that
> +of kmemcheck: use shadow memory to record whether each byte of memory is safe
> +to access, and use compile-time instrumentation to check shadow memory on each
> +memory access.
> +
> +AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
> +(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
> +offset to translate a memory address to its corresponding shadow address.
> +
> +Here is the function which translates an address to its corresponding shadow
> +address:
> +
> +unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +       return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +
> +where KASAN_SHADOW_SCALE_SHIFT = 3.
> +
> +Compile-time instrumentation is used for checking memory accesses. The compiler
> +inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
> +access of size 1, 2, 4, 8 or 16. These functions check whether the memory access is
> +valid or not by checking the corresponding shadow memory.
> +
> +GCC 5.0 can perform inline instrumentation. Instead of making function
> +calls, GCC directly inserts the code that checks the shadow memory.
> +This option significantly enlarges the kernel, but it gives a x1.1-x2 performance
> +boost over an outline-instrumented kernel.
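
(Roughly speaking, the inline mode open-codes the same check that the outline
__asan_load*/__asan_store* functions perform. A simplified C sketch of what the
generated code for a 1-byte load is equivalent to; the exact instruction
sequence GCC emits may differ:)

        /* Simplified equivalent of an inline-instrumented 1-byte load at 'addr'.
         * Illustrative only, not the literal compiler output.
         */
        s8 shadow = *(s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
        if (unlikely(shadow && (s8)(addr & KASAN_SHADOW_MASK) >= shadow))
                __asan_report_load1_noabort(addr);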
> diff --git a/Makefile b/Makefile
> index 92edae4..052c1f4 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
>  CFLAGS_KERNEL  =
>  AFLAGS_KERNEL  =
>  CFLAGS_GCOV    = -fprofile-arcs -ftest-coverage
> -
> +CFLAGS_KASAN   = $(call cc-option, -fsanitize=kernel-address)
>
>  # Use USERINCLUDE when you must reference the UAPI directories only.
>  USERINCLUDE    := \
> @@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
>  export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
>
>  export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
> -export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
> +export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
>  export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
>  export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
>  export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
> @@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
>  KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
>  endif
>
> +ifdef CONFIG_KASAN
> +ifdef CONFIG_KASAN_INLINE
> +  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
> +                       -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
> +                       --param asan-instrumentation-with-call-threshold=10000)
> +  ifeq ($(kasan_inline),)
> +    $(warning Cannot use CONFIG_KASAN_INLINE: \
> +             inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
> +  else
> +    CFLAGS_KASAN := $(kasan_inline)
> +  endif
> +
> +endif
> +  ifeq ($(CFLAGS_KASAN),)
> +    $(warning Cannot use CONFIG_KASAN: \
> +             -fsanitize=kernel-address is not supported by compiler)
> +  endif
> +endif
> +
>  # arch Makefile may override CC so keep this after arch Makefile is included
>  NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
>  CHECKFLAGS     += $(NOSTDINC_FLAGS)
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index b14bc2b..c5533c7 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -19,6 +19,7 @@ KBUILD_CFLAGS                 := $(cflags-y) \
>                                    $(call cc-option,-fno-stack-protector)
>
>  GCOV_PROFILE                   := n
> +KASAN_SANITIZE                 := n
>
>  lib-y                          := efi-stub-helper.o
>  lib-$(CONFIG_EFI_ARMSTUB)      += arm-stub.o fdt.o
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> new file mode 100644
> index 0000000..01c99fe
> --- /dev/null
> +++ b/include/linux/kasan.h
> @@ -0,0 +1,42 @@
> +#ifndef _LINUX_KASAN_H
> +#define _LINUX_KASAN_H
> +
> +#include <linux/types.h>
> +
> +struct kmem_cache;
> +struct page;
> +
> +#ifdef CONFIG_KASAN
> +#include <asm/kasan.h>
> +#include <linux/sched.h>
> +
> +#define KASAN_SHADOW_SCALE_SHIFT 3
> +#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +
> +static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
> +{
> +       return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +
> +static inline void kasan_enable_local(void)
> +{
> +       current->kasan_depth++;
> +}
> +
> +static inline void kasan_disable_local(void)
> +{
> +       current->kasan_depth--;
> +}
> +
> +void kasan_unpoison_shadow(const void *address, size_t size);
> +
> +#else /* CONFIG_KASAN */
> +
> +static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
> +
> +static inline void kasan_enable_local(void) {}
> +static inline void kasan_disable_local(void) {}
> +
> +#endif /* CONFIG_KASAN */
> +
> +#endif /* LINUX_KASAN_H */
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 8db31ef..26e1b47 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1662,6 +1662,9 @@ struct task_struct {
>         unsigned long timer_slack_ns;
>         unsigned long default_timer_slack_ns;
>
> +#ifdef CONFIG_KASAN
> +       unsigned int kasan_depth;
> +#endif
>  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>         /* Index of current stored address in ret_stack */
>         int curr_ret_stack;
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index ddd070a..bb26ec3 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -666,6 +666,8 @@ config DEBUG_STACKOVERFLOW
>
>  source "lib/Kconfig.kmemcheck"
>
> +source "lib/Kconfig.kasan"
> +
>  endmenu # "Memory Debugging"
>
>  config DEBUG_SHIRQ
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> new file mode 100644
> index 0000000..10341df
> --- /dev/null
> +++ b/lib/Kconfig.kasan
> @@ -0,0 +1,43 @@
> +config HAVE_ARCH_KASAN
> +       bool
> +
> +if HAVE_ARCH_KASAN
> +
> +config KASAN
> +       bool "AddressSanitizer: runtime memory debugger"
> +       help
> +         Enables the address sanitizer - a runtime memory debugger
> +         designed to find out-of-bounds accesses and use-after-free bugs.
> +         This is strictly a debugging feature. It consumes about 1/8
> +         of available memory and causes a ~3x performance slowdown.
> +         For better error detection enable CONFIG_STACKTRACE,
> +         and add slub_debug=U to the boot cmdline.
> +
> +config KASAN_SHADOW_OFFSET
> +       hex
> +
> +choice
> +       prompt "Instrumentation type"
> +       depends on KASAN
> +       default KASAN_OUTLINE
> +
> +config KASAN_OUTLINE
> +       bool "Outline instrumentation"
> +       help
> +         Before every memory access the compiler inserts a call to
> +         __asan_load*/__asan_store*. These functions check the shadow
> +         memory. This is slower than inline instrumentation, however
> +         it doesn't bloat the kernel's .text section as much as inline
> +         instrumentation does.
> +
> +config KASAN_INLINE
> +       bool "Inline instrumentation"
> +       help
> +         The compiler directly inserts code that checks the shadow memory
> +         before memory accesses. This is faster than outline (in some
> +         workloads it gives about a x2 boost over outline instrumentation),
> +         but makes the kernel's .text section much bigger.
> +
> +endchoice
> +
> +endif
> diff --git a/mm/Makefile b/mm/Makefile
> index d9d5794..33d9971 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -72,3 +72,4 @@ obj-$(CONFIG_ZSMALLOC)        += zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
>  obj-$(CONFIG_CMA)      += cma.o
>  obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
> +obj-$(CONFIG_KASAN)    += kasan/
> diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
> new file mode 100644
> index 0000000..ef2d313
> --- /dev/null
> +++ b/mm/kasan/Makefile
> @@ -0,0 +1,7 @@
> +KASAN_SANITIZE := n
> +
> +# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
> +# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
> +CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
> +
> +obj-y := kasan.o report.o
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..f77be01
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,374 @@
> +/*
> + * This file contains shadow memory manipulation code.
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * Some of code borrowed from https://github.com/xairy/linux by
> + *        Andrey Konovalov <adech.fo@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/export.h>
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/mm.h>
> +#include <linux/printk.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/stacktrace.h>
> +#include <linux/string.h>
> +#include <linux/types.h>
> +#include <linux/kasan.h>
> +
> +#include "kasan.h"
> +
> +/*
> + * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> + * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
> + */
> +static void kasan_poison_shadow(const void *address, size_t size, u8 value)
> +{
> +       unsigned long shadow_start, shadow_end;
> +       unsigned long addr = (unsigned long)address;
> +
> +       shadow_start = kasan_mem_to_shadow(addr);
> +       shadow_end = kasan_mem_to_shadow(addr + size);
> +
> +       memset((void *)shadow_start, value, shadow_end - shadow_start);
> +}
> +
> +void kasan_unpoison_shadow(const void *address, size_t size)
> +{
> +       kasan_poison_shadow(address, size, 0);
> +
> +       if (size & KASAN_SHADOW_MASK) {
> +               u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
> +                                               + size);
> +               *shadow = size & KASAN_SHADOW_MASK;
> +       }
> +}
> +
> +static __always_inline bool memory_is_poisoned_1(unsigned long addr)
> +{
> +       s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(shadow_value)) {
> +               s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
> +               return unlikely(last_accessible_byte >= shadow_value);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_2(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 1))
> +                       return true;
> +
> +               if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_4(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 3))
> +                       return true;
> +
> +               if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_8(unsigned long addr)
> +{
> +       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               if (memory_is_poisoned_1(addr + 7))
> +                       return true;
> +
> +               if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
> +                       return false;
> +
> +               return unlikely(*(u8 *)shadow_addr);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> +{
> +       u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
> +
> +       if (unlikely(*shadow_addr)) {
> +               u16 shadow_first_bytes = *(u16 *)shadow_addr;
> +               s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
> +
> +               if (unlikely(shadow_first_bytes))
> +                       return true;
> +
> +               if (likely(!last_byte))
> +                       return false;
> +
> +               return memory_is_poisoned_1(addr + 15);
> +       }
> +
> +       return false;
> +}
> +
> +static __always_inline unsigned long bytes_is_zero(unsigned long start,
> +                                       size_t size)
> +{
> +       while (size) {
> +               if (unlikely(*(u8 *)start))
> +                       return start;
> +               start++;
> +               size--;
> +       }
> +
> +       return 0;
> +}
> +
> +static __always_inline unsigned long memory_is_zero(unsigned long start,
> +                                               unsigned long end)
> +{
> +       unsigned int prefix = start % 8;
> +       unsigned int words;
> +       unsigned long ret;
> +
> +       if (end - start <= 16)
> +               return bytes_is_zero(start, end - start);
> +
> +       if (prefix) {
> +               prefix = 8 - prefix;
> +               ret = bytes_is_zero(start, prefix);
> +               if (unlikely(ret))
> +                       return ret;
> +               start += prefix;
> +       }
> +
> +       words = (end - start) / 8;
> +       while (words) {
> +               if (unlikely(*(u64 *)start))
> +                       return bytes_is_zero(start, 8);
> +               start += 8;
> +               words--;
> +       }
> +
> +       return bytes_is_zero(start, (end - start) % 8);
> +}
> +
> +static __always_inline bool memory_is_poisoned_n(unsigned long addr,
> +                                               size_t size)
> +{
> +       unsigned long ret;
> +
> +       ret = memory_is_zero(kasan_mem_to_shadow(addr),
> +                       kasan_mem_to_shadow(addr + size - 1) + 1);
> +
> +       if (unlikely(ret)) {
> +               unsigned long last_byte = addr + size - 1;
> +               s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
> +
> +               if (unlikely(ret != (unsigned long)last_shadow ||
> +                       ((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
> +                       return true;
> +       }
> +       return false;
> +}
> +
> +static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
> +{
> +       if (__builtin_constant_p(size)) {
> +               switch (size) {
> +               case 1:
> +                       return memory_is_poisoned_1(addr);
> +               case 2:
> +                       return memory_is_poisoned_2(addr);
> +               case 4:
> +                       return memory_is_poisoned_4(addr);
> +               case 8:
> +                       return memory_is_poisoned_8(addr);
> +               case 16:
> +                       return memory_is_poisoned_16(addr);
> +               default:
> +                       BUILD_BUG();
> +               }
> +       }
> +
> +       return memory_is_poisoned_n(addr, size);
> +}
> +
> +
> +static __always_inline void check_memory_region(unsigned long addr,
> +                                               size_t size, bool write)
> +{
> +       struct access_info info;
> +
> +       if (unlikely(size == 0))
> +               return;
> +
> +       if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
> +               info.access_addr = addr;
> +               info.access_size = size;
> +               info.is_write = write;
> +               info.ip = _RET_IP_;
> +               kasan_report_user_access(&info);
> +               return;
> +       }
> +
> +       if (likely(!memory_is_poisoned(addr, size)))
> +               return;
> +
> +       kasan_report(addr, size, write);
> +}
> +
> +void __asan_load1(unsigned long addr)
> +{
> +       check_memory_region(addr, 1, false);
> +}
> +EXPORT_SYMBOL(__asan_load1);
> +
> +void __asan_load2(unsigned long addr)
> +{
> +       check_memory_region(addr, 2, false);
> +}
> +EXPORT_SYMBOL(__asan_load2);
> +
> +void __asan_load4(unsigned long addr)
> +{
> +       check_memory_region(addr, 4, false);
> +}
> +EXPORT_SYMBOL(__asan_load4);
> +
> +void __asan_load8(unsigned long addr)
> +{
> +       check_memory_region(addr, 8, false);
> +}
> +EXPORT_SYMBOL(__asan_load8);
> +
> +void __asan_load16(unsigned long addr)
> +{
> +       check_memory_region(addr, 16, false);
> +}
> +EXPORT_SYMBOL(__asan_load16);
> +
> +void __asan_loadN(unsigned long addr, size_t size)
> +{
> +       check_memory_region(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_loadN);
> +
> +void __asan_store1(unsigned long addr)
> +{
> +       check_memory_region(addr, 1, true);
> +}
> +EXPORT_SYMBOL(__asan_store1);
> +
> +void __asan_store2(unsigned long addr)
> +{
> +       check_memory_region(addr, 2, true);
> +}
> +EXPORT_SYMBOL(__asan_store2);
> +
> +void __asan_store4(unsigned long addr)
> +{
> +       check_memory_region(addr, 4, true);
> +}
> +EXPORT_SYMBOL(__asan_store4);
> +
> +void __asan_store8(unsigned long addr)
> +{
> +       check_memory_region(addr, 8, true);
> +}
> +EXPORT_SYMBOL(__asan_store8);
> +
> +void __asan_store16(unsigned long addr)
> +{
> +       check_memory_region(addr, 16, true);
> +}
> +EXPORT_SYMBOL(__asan_store16);
> +
> +void __asan_storeN(unsigned long addr, size_t size)
> +{
> +       check_memory_region(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_storeN);
> +
> +/* to shut up compiler complaints */
> +void __asan_handle_no_return(void) {}
> +EXPORT_SYMBOL(__asan_handle_no_return);
> +
> +
> +/* GCC 5.0 has different function names by default */
> +__attribute__((alias("__asan_load1")))
> +void __asan_load1_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load1_noabort);
> +
> +__attribute__((alias("__asan_load2")))
> +void __asan_load2_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load2_noabort);
> +
> +__attribute__((alias("__asan_load4")))
> +void __asan_load4_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load4_noabort);
> +
> +__attribute__((alias("__asan_load8")))
> +void __asan_load8_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load8_noabort);
> +
> +__attribute__((alias("__asan_load16")))
> +void __asan_load16_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_load16_noabort);
> +
> +__attribute__((alias("__asan_loadN")))
> +void __asan_loadN_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_loadN_noabort);
> +
> +__attribute__((alias("__asan_store1")))
> +void __asan_store1_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store1_noabort);
> +
> +__attribute__((alias("__asan_store2")))
> +void __asan_store2_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store2_noabort);
> +
> +__attribute__((alias("__asan_store4")))
> +void __asan_store4_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store4_noabort);
> +
> +__attribute__((alias("__asan_store8")))
> +void __asan_store8_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store8_noabort);
> +
> +__attribute__((alias("__asan_store16")))
> +void __asan_store16_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_store16_noabort);
> +
> +__attribute__((alias("__asan_storeN")))
> +void __asan_storeN_noabort(unsigned long);
> +EXPORT_SYMBOL(__asan_storeN_noabort);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> new file mode 100644
> index 0000000..6da1d78
> --- /dev/null
> +++ b/mm/kasan/kasan.h
> @@ -0,0 +1,49 @@
> +#ifndef __MM_KASAN_KASAN_H
> +#define __MM_KASAN_KASAN_H
> +
> +#include <linux/kasan.h>
> +
> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
> +
> +#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
> +
> +struct access_info {
> +       unsigned long access_addr;
> +       unsigned long first_bad_addr;
> +       size_t access_size;
> +       bool is_write;
> +       unsigned long ip;
> +};
> +
> +void kasan_report_error(struct access_info *info);
> +void kasan_report_user_access(struct access_info *info);
> +
> +static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
> +{
> +       return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
> +}
> +
> +static inline bool kasan_enabled(void)
> +{
> +       return !current->kasan_depth;
> +}
> +
> +static __always_inline void kasan_report(unsigned long addr,
> +                                       size_t size,
> +                                       bool is_write)
> +{
> +       struct access_info info;
> +
> +       if (likely(!kasan_enabled()))
> +               return;
> +
> +       info.access_addr = addr;
> +       info.access_size = size;
> +       info.is_write = is_write;
> +       info.ip = _RET_IP_;
> +       kasan_report_error(&info);
> +}
> +
> +
> +#endif
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> new file mode 100644
> index 0000000..56a2089
> --- /dev/null
> +++ b/mm/kasan/report.c
> @@ -0,0 +1,205 @@
> +/*
> + * This file contains error reporting code.
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * Some of code borrowed from https://github.com/xairy/linux by
> + *        Andrey Konovalov <adech.fo@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/printk.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/stacktrace.h>
> +#include <linux/string.h>
> +#include <linux/types.h>
> +#include <linux/kasan.h>
> +
> +#include "kasan.h"
> +
> +/* Shadow layout customization. */
> +#define SHADOW_BYTES_PER_BLOCK 1
> +#define SHADOW_BLOCKS_PER_ROW 16
> +#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
> +#define SHADOW_ROWS_AROUND_ADDR 5
> +
> +static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
> +{
> +       u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
> +       unsigned long first_bad_addr = addr;
> +
> +       while (!shadow_val && first_bad_addr < addr + size) {
> +               first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
> +               shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
> +       }
> +       return first_bad_addr;
> +}
> +
> +static void print_error_description(struct access_info *info)
> +{
> +       const char *bug_type = "unknown crash";
> +       u8 shadow_val;
> +
> +       info->first_bad_addr = find_first_bad_addr(info->access_addr,
> +                                               info->access_size);
> +
> +       shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
> +
> +       switch (shadow_val) {
> +       case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +               bug_type = "out of bounds access";
> +               break;
> +       case KASAN_SHADOW_GAP:
> +               bug_type = "wild memory access";
> +               break;
> +       }
> +
> +       pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
> +               bug_type, (void *)info->ip,
> +               (void *)info->access_addr);
> +       pr_err("%s of size %zu by task %s/%d\n",
> +               info->is_write ? "Write" : "Read",
> +               info->access_size, current->comm, task_pid_nr(current));
> +}
> +
> +static void print_address_description(struct access_info *info)
> +{
> +       struct page *page;
> +       u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
> +
> +       page = virt_to_head_page((void *)info->access_addr);
> +
> +       switch (shadow_val) {
> +       case KASAN_SHADOW_GAP:
> +               pr_err("No metainfo is available for this access.\n");
> +               dump_stack();
> +               break;
> +       default:
> +               WARN_ON(1);
> +       }
> +}
> +
> +static bool row_is_guilty(unsigned long row, unsigned long guilty)
> +{
> +       return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
> +}
> +
> +static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
> +{
> +       /* The length of ">ff00ff00ff00ff00: " is
> +        *    3 + (BITS_PER_LONG/8)*2 chars.
> +        */
> +       return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
> +               (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
> +}
> +
> +static void print_shadow_for_address(unsigned long addr)
> +{
> +       int i;
> +       unsigned long shadow = kasan_mem_to_shadow(addr);
> +       unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
> +               - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
> +
> +       pr_err("Memory state around the buggy address:\n");
> +
> +       for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
> +               unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
> +               char buffer[4 + (BITS_PER_LONG/8)*2];
> +
> +               snprintf(buffer, sizeof(buffer),
> +                       (i == 0) ? ">%lx: " : " %lx: ", kaddr);
> +
> +               kasan_disable_local();
> +               print_hex_dump(KERN_ERR, buffer,
> +                       DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
> +                       (void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
> +               kasan_enable_local();
> +
> +               if (row_is_guilty(aligned_shadow, shadow))
> +                       pr_err("%*c\n",
> +                               shadow_pointer_offset(aligned_shadow, shadow),
> +                               '^');
> +
> +               aligned_shadow += SHADOW_BYTES_PER_ROW;
> +       }
> +}
> +
> +static DEFINE_SPINLOCK(report_lock);
> +
> +void kasan_report_error(struct access_info *info)
> +{
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&report_lock, flags);
> +       pr_err("================================="
> +               "=================================\n");
> +       print_error_description(info);
> +       print_address_description(info);
> +       print_shadow_for_address(info->first_bad_addr);
> +       pr_err("================================="
> +               "=================================\n");
> +       spin_unlock_irqrestore(&report_lock, flags);
> +}
> +
> +void kasan_report_user_access(struct access_info *info)
> +{
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&report_lock, flags);
> +       pr_err("================================="
> +               "=================================\n");
> +       pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
> +               info->access_addr);
> +       pr_err("%s of size %zu by thread T%d:\n",
> +               info->is_write ? "Write" : "Read",
> +               info->access_size, current->pid);
> +       dump_stack();
> +       pr_err("================================="
> +               "=================================\n");
> +       spin_unlock_irqrestore(&report_lock, flags);
> +}
> +
> +#define DEFINE_ASAN_REPORT_LOAD(size)                     \
> +void __asan_report_load##size##_noabort(unsigned long addr) \
> +{                                                         \
> +       kasan_report(addr, size, false);                  \
> +}                                                         \
> +EXPORT_SYMBOL(__asan_report_load##size##_noabort)
> +
> +#define DEFINE_ASAN_REPORT_STORE(size)                     \
> +void __asan_report_store##size##_noabort(unsigned long addr) \
> +{                                                          \
> +       kasan_report(addr, size, true);                    \
> +}                                                          \
> +EXPORT_SYMBOL(__asan_report_store##size##_noabort)
> +
> +DEFINE_ASAN_REPORT_LOAD(1);
> +DEFINE_ASAN_REPORT_LOAD(2);
> +DEFINE_ASAN_REPORT_LOAD(4);
> +DEFINE_ASAN_REPORT_LOAD(8);
> +DEFINE_ASAN_REPORT_LOAD(16);
> +DEFINE_ASAN_REPORT_STORE(1);
> +DEFINE_ASAN_REPORT_STORE(2);
> +DEFINE_ASAN_REPORT_STORE(4);
> +DEFINE_ASAN_REPORT_STORE(8);
> +DEFINE_ASAN_REPORT_STORE(16);
> +
> +void __asan_report_load_n_noabort(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_report_load_n_noabort);
> +
> +void __asan_report_store_n_noabort(unsigned long addr, size_t size)
> +{
> +       kasan_report(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_report_store_n_noabort);
> diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
> index 5117552..a5845a2 100644
> --- a/scripts/Makefile.lib
> +++ b/scripts/Makefile.lib
> @@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
>                 $(CFLAGS_GCOV))
>  endif
>
> +#
> +# Enable address sanitizer flags for kernel except some files or directories
> +# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
> +#
> +ifeq ($(CONFIG_KASAN),y)
> +_c_flags += $(if $(patsubst n%,, \
> +               $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
> +               $(CFLAGS_KASAN))
> +endif
> +
>  # If building the kernel in a separate objtree expand all occurrences
>  # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
>
> --
> 2.1.3
>
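
For illustration, each DEFINE_ASAN_REPORT_LOAD(size)/DEFINE_ASAN_REPORT_STORE(size)
invocation above stamps out a thin reporting stub; DEFINE_ASAN_REPORT_LOAD(4), for
example, expands to roughly the following (a sketch of the macro expansion only):

     void __asan_report_load4_noabort(unsigned long addr)
     {
             kasan_report(addr, 4, false);
     }
     EXPORT_SYMBOL(__asan_report_load4_noabort);

These stubs are what the compiler's instrumentation calls once a shadow check fails.
The scripts/Makefile.lib hunk, in turn, adds CFLAGS_KASAN to a file's compile flags
unless the file or directory opts out via the KASAN_SANITIZE_obj.o or KASAN_SANITIZE
variables named in its comment.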


* Re: [PATCH v7 02/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-11-24 18:02     ` Andrey Ryabinin
@ 2014-11-25 12:41       ` Dmitry Chernenkov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Chernenkov @ 2014-11-25 12:41 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Thomas Gleixner, Ingo Molnar

LGTM

On Mon, Nov 24, 2014 at 9:02 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> Reading irq_stack_union.gs_base after load_segment creates trouble for kasan:
> the compiler inserts an __asan_load call in between load_segment and wrmsrl.
> If the kernel is built with stackprotector, this results in a boot failure,
> because __asan_load itself uses a stack protector.
>
> To avoid this, irq_stack_union.gs_base is stored in a temporary variable
> before load_segment, so the __asan_load call happens before load_segment().
>
> There are two alternative ways to fix this:
>  a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
>     which tells the compiler not to instrument this function. However, this
>     results in a build failure with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y.
>
>  b) Add -fno-stack-protector for mm/kasan/kasan.c
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  arch/x86/kernel/cpu/common.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 8779d63..97f56f6 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -389,8 +389,10 @@ void load_percpu_segment(int cpu)
>  #ifdef CONFIG_X86_32
>         loadsegment(fs, __KERNEL_PERCPU);
>  #else
> +       void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
> +
>         loadsegment(gs, 0);
> -       wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
> +       wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
>  #endif
>         load_stack_canary_segment();
>  }
> --
> 2.1.3
>
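
For reference, alternative (a) from the changelog relies on GCC's function attribute
for suppressing ASan instrumentation. A minimal sketch of what that looks like, on a
hypothetical helper rather than on the kernel code itself:

     /* Sketch of alternative (a): ask GCC not to instrument this function,
      * so no __asan_load/__asan_store calls are emitted for its accesses. */
     static int __attribute__((no_sanitize_address)) read_raw(const int *p)
     {
             return *p;                  /* no shadow check inserted here */
     }

     int read_either(const int *p)
     {
             return read_raw(p) + *p;    /* the *p here is instrumented as usual */
     }

As the changelog notes, applying the attribute to load_percpu_segment() itself ran
into a build failure with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y, which is why
the temporary-variable approach above was chosen.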



* Re: [PATCH v7 11/12] lib: add kasan test module
  2014-11-25 11:14       ` Dmitry Chernenkov
@ 2014-11-25 13:09         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-25 13:09 UTC (permalink / raw)
  To: Dmitry Chernenkov
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, linux-kernel

On 11/25/2014 02:14 PM, Dmitry Chernenkov wrote:
> I have a few concerns about the tests.
> A) They are not fully automated; there is no checking whether they
> pass or not. This is implemented in our repository using special tags
> in the log (https://github.com/google/kasan/commit/33b267553e7ffe66d5207152a3294112361b75fe;
> don't mind the TODOs, they weren't broken to begin with), and a
> parser script (https://code.google.com/p/address-sanitizer/source/browse/trunk/tools/kernel_test_parse.py)
> to feed the kernel log to.
> 
> B) They are not thorough enough - they don't check false negatives,

False negative means a KASAN report on a valid access, right? Most of the memory accesses
in the kernel are valid, so just booting the kernel should give you the best check for false
negatives you can ever write.

Though I agree that it's not very thorough. Currently this is more of a demonstration module,
and there are a lot of cases it doesn't cover.

> accesses more than 1 byte away etc.
> 
> C) (more of a general concern about current KASan reliability) - when
> running multiple times, some tests are flaky, specifically oob_right
> and uaf2. The latter needs quarantine to work reliably (I know
> Konstantin is working on it). oob_right needs redzones at the
> beginning of the slabs.
> 
> I know all of these may seem like long shots, but if we want a
> reliable solution (also a backportable solution), we need to at least
> consider them.
> 
> Otherwise, LGTM
> 
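
For context, the tests in the module under discussion are roughly of the following
shape (a sketch in the spirit of lib/test_kasan.c, not a verbatim copy), which is why
pass/fail currently has to be judged from the kernel log, either by a human or by an
external parser such as the script linked above:

     /* Sketch of an out-of-bounds test: the deliberate bad access is expected
      * to produce a KASan report in the log; nothing in the module itself
      * verifies that the report actually appeared. */
     static noinline void __init kmalloc_oob_right_sketch(void)
     {
             char *ptr;
             size_t size = 123;

             pr_info("kasan test: out-of-bounds to the right\n");
             ptr = kmalloc(size, GFP_KERNEL);
             if (!ptr) {
                     pr_err("Allocation failed\n");
                     return;
             }

             ptr[size] = 'x';        /* one byte past the end of the object */
             kfree(ptr);
     }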





* Re: [PATCH v7 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-11-25 12:22       ` Dmitry Chernenkov
@ 2014-11-25 13:11         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-25 13:11 UTC (permalink / raw)
  To: Dmitry Chernenkov
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Pekka Enberg, David Rientjes

On 11/25/2014 03:22 PM, Dmitry Chernenkov wrote:
> LGTM
> 
> Does this mean we're going to sanitize the slub code itself?)
> 

Nope, to sanitize slub itself we need much more than just this.
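
For context, the helpers introduced by this patch are thin wrappers that suspend
KASan's checks while SLUB inspects object metadata (redzones, free pointers, debug
info). A sketch of their shape, assuming the kasan_disable_local()/kasan_enable_local()
helpers from the first patch of the series:

     /* Sketch: bracket SLUB's legitimate accesses to poisoned metadata so
      * that they are not reported as errors. */
     static inline void metadata_access_enable(void)
     {
             kasan_disable_local();
     }

     static inline void metadata_access_disable(void)
     {
             kasan_enable_local();
     }

Sanitizing the SLUB internals themselves would require much more than this, as noted
above.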




* Re: [PATCH v7 08/12] mm: slub: add kernel address sanitizer support for slub allocator
  2014-11-25 12:17       ` Dmitry Chernenkov
@ 2014-11-25 13:18         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-25 13:18 UTC (permalink / raw)
  To: Dmitry Chernenkov
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, Dave Jones, x86,
	linux-mm, LKML, Pekka Enberg, David Rientjes

On 11/25/2014 03:17 PM, Dmitry Chernenkov wrote:
> FYI, when I backported Kasan to 3.14, in kasan_mark_slab_padding()
> sometimes a negative size of padding was generated.

I don't see how this could happen if the pointers passed to kasan_mark_slab_padding() are correct.

Negative padding would mean that (object + s->size) crosses the slab page boundary.
This is either a slub allocator bug (very unlikely), or some of the pointers passed to
kasan_mark_slab_padding() are not correct.

Or maybe I'm missing something?

> This started
> working when the patch below was applied:
> 
> @@ -262,12 +264,11 @@ void kasan_free_pages(struct page *page, unsigned int order)
>  void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
>  			     struct page *page)
>  {
> -	unsigned long object_end = (unsigned long)object + s->size;
> -	unsigned long padding_start = round_up(object_end,
> -					       KASAN_SHADOW_SCALE_SIZE);
> -	unsigned long padding_end = (unsigned long)page_address(page) +
> -				    (PAGE_SIZE << compound_order(page));
> -	size_t size = padding_end - padding_start;
> +	unsigned long page_start = (unsigned long) page_address(page);
> +	unsigned long page_end = page_start + (PAGE_SIZE << compound_order(page));
> +	unsigned long padding_start = round_up(page_end - s->reserved,
> +					       KASAN_SHADOW_SCALE_SIZE);
> +	size_t size = page_end - padding_start;
> 
>  	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
>  }
> 
> Also, in kasan_slab_free you poison the shadow with FREE for not just the
> object space but also the redzones. This is inefficient and will report a
> right out-of-bounds error for the next object as a use-after-free.
> This is fixed here:
> https://github.com/google/kasan/commit/4b3238be392ba0bc56bbc934ac545df3ff840782
> please patch.
> 

Makes sense.


> 
> LGTM
> 






* Re: [PATCH v7 01/12] Add kernel address sanitizer infrastructure.
  2014-11-25 12:40       ` Dmitry Chernenkov
@ 2014-11-25 14:16         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-25 14:16 UTC (permalink / raw)
  To: Dmitry Chernenkov
  Cc: Andrew Morton, Randy Dunlap, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	H. Peter Anvin, Dave Jones, x86, linux-mm, LKML, Jonathan Corbet,
	Michal Marek, Ingo Molnar, Peter Zijlstra

On 11/25/2014 03:40 PM, Dmitry Chernenkov wrote:
> I'm a little concerned with how enabling/disabling works. If an
> enable() is forgotten once, it's disabled forever. If a disable() is
> forgotten once, the toggle is reversed for the foreseeable future. Maybe
> check for inequality in kasan_enabled()? Like current->kasan_depth >=
> 0 (it will need a signed int for the field). Do you think it's going to
> decrease performance?

I think that such a check in kasan_enabled() shouldn't hurt much.
But it also doesn't look very useful to me.

There are only a few users of kasan_disable_local/kasan_enable_local, so it's easy to review them.
And in the future we shouldn't have a lot of new users of those functions either.

> 
> LGTM
> 
> 
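
To make the trade-off concrete, here is a minimal sketch of the nesting counter under
discussion, assuming that checks are active only while the per-task depth is zero and
that kasan_disable_local() increments it while kasan_enable_local() decrements it (the
standalone struct is only for illustration):

     struct task_kasan {
             int kasan_depth;        /* signed, as suggested in the review */
     };

     static inline bool kasan_enabled(const struct task_kasan *t)
     {
             /* Strict form: an unbalanced kasan_enable_local() drives the depth
              * negative and inverts the toggle from then on. The inequality
              * suggested in the review (with the matching sign convention, e.g.
              * kasan_depth <= 0 here) would tolerate that case instead. */
             return t->kasan_depth == 0;
     }

     static inline void kasan_disable_local(struct task_kasan *t)
     {
             t->kasan_depth++;
     }

     static inline void kasan_enable_local(struct task_kasan *t)
     {
             t->kasan_depth--;
     }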




* [PATCH v8 00/12] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2014-11-27 16:00   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Joe Perches, Linus Torvalds, linux-kernel

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC (>= v4.9.2).

Patches are based on, and should apply cleanly on top of, 3.18-rc6 and mmotm-2014-11-26-15-45.
Patches are available in git as well:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v8

Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y from Sasha.

        - Don't poison the redzone on freeing, since it is already poisoned (change from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.

Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work with several CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  a bug right when it happens, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
    (on x86_64 16TB of virtual address space reserved for shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by consulting the
    corresponding shadow memory. If the access is not valid, an error is printed.
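
    As an illustration of this encoding, a validity check for a 1-byte access can be
    written roughly as follows (a simplified sketch, not the exact code from
    mm/kasan/kasan.c; KASAN_SHADOW_MASK is assumed here to be the 8-byte granule mask):

         #define KASAN_SHADOW_MASK  ((1UL << KASAN_SHADOW_SCALE_SHIFT) - 1)

         static bool memory_is_valid_1(unsigned long addr)
         {
                 s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

                 if (shadow_value == 0)
                         return true;    /* whole 8-byte granule accessible */
                 if (shadow_value < 0)
                         return false;   /* redzone, freed memory, ... */

                 /* Partially accessible granule: only the first shadow_value
                  * bytes are valid, so the offset within the granule decides. */
                 return (addr & KASAN_SHADOW_MASK) < shadow_value;
         }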


Changelog for previous versions:
===============================

Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently, instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier, which prints a hint message before a general protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changes in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split,
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives
      an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        The GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in doc (GCC kasan patches were backported into 4.9 branch)

    - Dropped the last patch with inline instrumentation support. For now, let's wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This requires two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64, to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (12):
  Add kernel address sanitizer infrastructure.
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions

 Documentation/kasan.txt                | 169 +++++++++++
 Makefile                               |  23 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  27 ++
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   2 +
 arch/x86/kernel/cpu/common.c           |   4 +-
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  28 ++
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 108 +++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   6 +
 include/linux/kasan.h                  |  69 +++++
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |  10 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  54 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 254 ++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 509 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  54 ++++
 mm/kasan/report.c                      | 237 +++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  56 +++-
 scripts/Makefile.lib                   |  10 +
 46 files changed, 1725 insertions(+), 26 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joe Perches <joe@perches.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
-- 
2.1.3




* [PATCH v8 01/12] Add kernel address sanitizer infrastructure.
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Jonathan Corbet, Michal Marek,
	Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides a
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the
shadow memory on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses a direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
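
For example (using a purely illustrative offset, not a value defined by this
patch): if KASAN_SHADOW_OFFSET were 0x1000, the address 0x8000 would map to
the shadow byte at (0x8000 >> 3) + 0x1000 = 0x2000, and all eight addresses
0x8000 - 0x8007 would share that single shadow byte.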

So for every 8 bytes of memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
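
For instance, a 13-byte object covers one full 8-byte granule plus 5 bytes of
the next one, so its two shadow bytes are 00 and 05; the granules of the
redzone placed after the object carry a negative marker (visible as fc in the
example report in Documentation/kasan.txt below).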

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
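
As a rough illustration (not the exact code the compiler emits), an instrumented
one-byte store such as p[0] = 1 behaves as if it were compiled to:

     __asan_store1((unsigned long)&p[0]);   /* check the shadow byte, report if poisoned */
     p[0] = 1;                              /* the original access */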

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be 500-600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck couldn't work on multiple CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG is in most cases not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 205 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 928 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the compiler instrumentation types. The former produces a smaller
+binary while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+memory access is valid by checking the corresponding shadow memory.
+
+GCC 5.0 is able to perform inline instrumentation. Instead of making function
+calls, GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives a 1.1x-2x performance boost over
+an outline-instrumented kernel.
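+
+As a rough sketch (illustrative only, not the exact code GCC generates), an
+inline-instrumented one-byte store of 'value' to 'addr' behaves like:
+
+	s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow && (s8)(addr & KASAN_SHADOW_MASK) >= shadow))
+		__asan_report_store1_noabort(addr);
+	*(u8 *)addr = value;
+
+where KASAN_SHADOW_MASK is 7 (see mm/kasan/kasan.h).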
diff --git a/Makefile b/Makefile
index 8869dc8..b382b62 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1c23b54..9843de2 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -670,6 +670,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call
+	  (__asan_load*/__asan_store*). These functions check the
+	  shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline instrumentation (in some
+	  workloads it gives about a 2x boost), but it
+	  makes the kernel's .text section much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 3548460..930b52d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..28486bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..f77be01
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,374 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+__attribute__((alias("__asan_load1")))
+void __asan_load1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+__attribute__((alias("__asan_load2")))
+void __asan_load2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+__attribute__((alias("__asan_load4")))
+void __asan_load4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+__attribute__((alias("__asan_load8")))
+void __asan_load8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+__attribute__((alias("__asan_load16")))
+void __asan_load16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+__attribute__((alias("__asan_store1")))
+void __asan_store1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+__attribute__((alias("__asan_store2")))
+void __asan_store2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+__attribute__((alias("__asan_store4")))
+void __asan_store4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+__attribute__((alias("__asan_store8")))
+void __asan_store8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+__attribute__((alias("__asan_store16")))
+void __asan_store16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..56a2089
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,205 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 01/12] Add kernel address sanitizer infrastructure.
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Jonathan Corbet, Michal Marek,
	Ingo Molnar, Peter Zijlstra

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8-bytes are inaccessible.
Different negative values used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether memory region is valid to access or not by checking
corresponding shadow memory. If access is not valid an error printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in out internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of unitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck couldn't work on several CPUs. It always sets number of CPUs to 1.
	  KASan doesn't have such limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases are not able to detect bad reads,
	  KASan able to detect both reads and writes.

	- In some cases (e.g. redzone overwritten) SLUB_DEBUG detect
	  bugs only on allocation/freeing of object. KASan catch
	  bugs right before it will happen, so we always know exact
	  place of first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++
 Makefile                              |  23 ++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  42 ++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 ++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 374 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  49 +++++
 mm/kasan/report.c                     | 205 +++++++++++++++++++
 scripts/Makefile.lib                  |  10 +
 13 files changed, 928 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need a certain version of GCC >= 4.9.2
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline/inline
+is compiler instrumentation types. The former produces smaller binary the
+latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+latter.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+First sections describe slub object where bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function witch translate an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation used for checking memory accesses. Compiler inserts
+function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
+access of size 1, 2, 4, 8 or 16. These functions check whether memory access is
+valid or not by checking corresponding shadow memory.
+
+GCC 5.0 has possibility to perform inline instrumentation. Instead of making
+function calls GCC directly inserts the code to check the shadow memory.
+This option significantly enlarges kernel but it gives x1.1-x2 performance
+boost over outline instrumented kernel.
diff --git a/Makefile b/Makefile
index 8869dc8..b382b62 100644
--- a/Makefile
+++ b/Makefile
@@ -382,7 +382,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
-
+CFLAGS_KASAN	= $(call cc-option, -fsanitize=kernel-address)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -427,7 +427,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -758,6 +758,25 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+  kasan_inline := $(call cc-option, $(CFLAGS_KASAN) \
+			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+			--param asan-instrumentation-with-call-threshold=10000)
+  ifeq ($(kasan_inline),)
+    $(warning Cannot use CONFIG_KASAN_INLINE: \
+	      inline instrumentation is not supported by compiler. Trying CONFIG_KASAN_OUTLINE.)
+  else
+    CFLAGS_KASAN := $(kasan_inline)
+  endif
+
+endif
+  ifeq ($(CFLAGS_KASAN),)
+    $(warning Cannot use CONFIG_KASAN: \
+	      -fsanitize=kernel-address is not supported by compiler)
+  endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..01c99fe
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1c23b54..9843de2 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -670,6 +670,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables the address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call
+	  (__asan_load*/__asan_store*). These functions check the
+	  shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the kernel's .text section as
+	  much as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline instrumentation (in some
+	  workloads it gives about a 2x boost over outline), but it
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 3548460..930b52d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..28486bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..f77be01
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,374 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+
+/* GCC 5.0 has different function names by default */
+__attribute__((alias("__asan_load1")))
+void __asan_load1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load1_noabort);
+
+__attribute__((alias("__asan_load2")))
+void __asan_load2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load2_noabort);
+
+__attribute__((alias("__asan_load4")))
+void __asan_load4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load4_noabort);
+
+__attribute__((alias("__asan_load8")))
+void __asan_load8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load8_noabort);
+
+__attribute__((alias("__asan_load16")))
+void __asan_load16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_load16_noabort);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+__attribute__((alias("__asan_store1")))
+void __asan_store1_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store1_noabort);
+
+__attribute__((alias("__asan_store2")))
+void __asan_store2_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store2_noabort);
+
+__attribute__((alias("__asan_store4")))
+void __asan_store4_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store4_noabort);
+
+__attribute__((alias("__asan_store8")))
+void __asan_store8_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store8_noabort);
+
+__attribute__((alias("__asan_store16")))
+void __asan_store16_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_store16_noabort);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long);
+EXPORT_SYMBOL(__asan_storeN_noabort);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..6da1d78
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,49 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..56a2089
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,205 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	page = virt_to_head_page((void *)info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 02/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Thomas Gleixner, Ingo Molnar

Reading irq_stack_union.gs_base after load_segment creates trouble for kasan.
The compiler inserts an __asan_load call between load_segment and wrmsrl. If the
kernel is built with a stack protector, this results in a boot failure, because
__asan_load itself uses the stack protector.
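
Roughly, the instrumented sequence without this patch looks like the sketch
below (not actual compiler output; addr_read_by_per_cpu is a placeholder for
whatever address the per_cpu() expansion reads). On x86_64 the stack protector
canary is read through the GS segment, which has no valid base at that point:

	loadsegment(gs, 0);
	/* compiler-inserted check for the memory read hidden in per_cpu();
	 * __asan_load8() is built with the stack protector, so it reads the
	 * canary via GS: */
	__asan_load8(addr_read_by_per_cpu);
	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));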

To avoid this, irq_stack_union.gs_base is stored in a temporary variable before
load_segment, so __asan_load is called before load_segment().

There are two alternative ways to fix this:
 a) Add __attribute__((no_sanitize_address)) to load_percpu_segment(),
    which tells the compiler not to instrument this function. However, this
    will result in a build failure with CONFIG_KASAN=y and CONFIG_OPTIMIZE_INLINING=y.

 b) Add -fno-stack-protector for mm/kasan/kasan.c

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/cpu/common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5475f67..1291d69 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -391,8 +391,10 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
+	void *gs_base = per_cpu(irq_stack_union.gs_base, cpu);
+
 	loadsegment(gs, 0);
-	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
+	wrmsrl(MSR_GS_BASE, (unsigned long)gs_base);
 #endif
 	load_stack_canary_segment();
 }
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 03/12] x86_64: add KASan support
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Thomas Gleixner, Ingo Molnar

This patch adds arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for the shadow memory. It is located in
the range [0xffffd90000000000 - 0xffffe90000000000], which belongs to the
vmalloc area.
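
(As a quick check of those numbers: 0xffffe90000000000 - 0xffffd90000000000 =
0x100000000000 = 16TB, i.e. 1/8 of the 128TB kernel half of the address space,
matching the 1:8 shadow scale.)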

At an early stage we map the whole shadow region with the zero page. Later,
after pages are mapped into the direct mapping address range, we unmap the zero
pages from the corresponding shadow (see kasan_map_shadow()) and allocate and
map real shadow memory, reusing the vmemmap_populate() function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
__pa with CONFIG_DEBUG_VIRTUAL=y makes an external function call (__phys_addr);
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  27 ++++++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +++-
 arch/x86/kernel/head_64.S         |  28 ++++++++++
 arch/x86/kernel/setup.c           |   3 ++
 arch/x86/mm/Makefile              |   3 ++
 arch/x86/mm/kasan_init_64.c       | 108 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 15 files changed, 192 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index da51602..f761193 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -84,6 +84,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 5b016e2..1ef2724 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d999398..0bf4d9f 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..47e0d42
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+#define KASAN_SHADOW_START	0xffffd90000000000UL
+#define KASAN_SHADOW_END	0xffffe90000000000UL
+
+#ifndef __ASSEMBLY__
+
+extern pte_t zero_pte[];
+extern pte_t zero_pmd[];
+extern pte_t zero_pud[];
+
+extern pte_t poisoned_pte[];
+extern pte_t poisoned_pmd[];
+extern pte_t poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_zero_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_zero_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d4502c..74d3f3e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..b9e4e50 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_zero_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_zero_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..444105c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,36 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pmd)
+	FILL(zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(zero_pud)
+	FILL(zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(poisoned_pte)
+	FILL(poisoned_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pmd)
+	FILL(poisoned_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(poisoned_pud)
+	FILL(poisoned_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+#ifdef CONFIG_KASAN
+NEXT_PAGE(poisoned_page)
+	.fill PAGE_SIZE,1,0xF9
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ab4734e..4912b74 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1176,6 +1177,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..995de9d
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,108 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+struct vm_struct kasan_vm __initdata = {
+	.addr = (void *)KASAN_SHADOW_START,
+	.size = (16UL << 40),
+};
+
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up the fast path. In some rare cases we could cross
+	 * the boundary of the mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_zero_shadow_mapping(unsigned long start,
+					unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_zero_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = kasan_mem_to_shadow(KASAN_SHADOW_START);
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = kasan_mem_to_shadow(KASAN_SHADOW_END);
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(poisoned_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+	start = end;
+	end = KASAN_SHADOW_END;
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(zero_pud) | __PAGE_KERNEL_RO);
+		start += PGDIR_SIZE;
+	}
+
+}
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled\n");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+	vm_area_add_early(&kasan_vm);
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
+				kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..386cc8b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables the address sanitizer - a runtime memory debugger
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdfffe90000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 04/12] mm: page_alloc: add kasan hooks on alloc and free paths
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
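
For example (assuming 4K pages), freeing an order-0 page poisons
PAGE_SIZE >> KASAN_SHADOW_SCALE_SHIFT = 4096 / 8 = 512 shadow bytes with
KASAN_FREE_PAGE (0xFF); allocating it again clears those bytes back to 0.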

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 546e571..12f2c7d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -61,6 +62,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index f77be01..b336073 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da1d78..2a6a961 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 56a2089..8ac3b6b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -78,6 +81,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b2f5a6..4ea0e33 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kmemcheck.h>
+#include <linux/kasan.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -804,6 +805,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -982,6 +984,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread
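
A quick way to see what the two hooks above cost: with the byte-per-8-bytes
shadow mapping from the cover letter, an order-N page maps to
(PAGE_SIZE << N) / 8 shadow bytes, unpoisoned on allocation and poisoned with
KASAN_FREE_PAGE on free. The sketch below is illustrative userspace C, not
kernel code; PAGE_SIZE == 4096 is an assumption.

    #include <stdio.h>

    #define PAGE_SIZE                4096UL  /* assumed; architecture dependent */
    #define KASAN_SHADOW_SCALE_SHIFT 3       /* one shadow byte per 8 bytes */

    int main(void)
    {
            /*
             * kasan_alloc_pages()/kasan_free_pages() act on PAGE_SIZE << order
             * bytes of lowmem; the shadow they touch is 1/8 of that.
             */
            for (unsigned int order = 0; order <= 3; order++) {
                    unsigned long bytes = PAGE_SIZE << order;

                    printf("order %u: %5lu bytes -> %4lu shadow bytes (0x00 on alloc, 0xff on free)\n",
                           order, bytes, bytes >> KASAN_SHADOW_SCALE_SHIFT);
            }
            return 0;
    }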

* [PATCH v8 04/12] mm: page_alloc: add kasan hooks on alloc and free paths
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel

Add kernel address sanitizer hooks to mark the addresses of allocated
pages as accessible in the corresponding shadow region, and mark
freed pages as inaccessible.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 546e571..12f2c7d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -61,6 +62,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index f77be01..b336073 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da1d78..2a6a961 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 56a2089..8ac3b6b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -78,6 +81,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b2f5a6..4ea0e33 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kmemcheck.h>
+#include <linux/kasan.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -804,6 +805,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -982,6 +984,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 05/12] mm: slub: introduce virt_to_obj function.
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread
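
The virt_to_obj() arithmetic above is easiest to see with concrete numbers.
The following is a minimal illustrative userspace sketch, not kernel code;
the page address, object size and pointer value are made up for the example.

    #include <stdio.h>
    #include <stdint.h>

    /* Same expression as virt_to_obj() above, on plain integers. */
    static uintptr_t obj_start(uintptr_t slab_page, size_t obj_size, uintptr_t x)
    {
            return x - ((x - slab_page) % obj_size);
    }

    int main(void)
    {
            uintptr_t slab_page = 0x1000;            /* hypothetical page_address() */
            size_t size = 64;                        /* hypothetical s->size */
            uintptr_t x = slab_page + 2 * size + 13; /* 13 bytes into the third object */

            printf("%#lx -> %#lx\n", (unsigned long)x,
                   (unsigned long)obj_start(slab_page, size, x));
            return 0;
    }

This prints 0x108d -> 0x1080, i.e. the start of the object that contains x.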

* [PATCH v8 05/12] mm: slub: introduce virt_to_obj function.
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object, and returns
the address of the beginning of that object.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 06/12] mm: slub: share slab_err and object_err functions
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

Remove static and add the function declarations to
include/linux/slub_def.h so they can be used by the kernel address
sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 mm/slub.c                | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..144b5cb 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,9 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+__printf(3, 4)
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 95d2142..0c01584 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 06/12] mm: slub: share slab_err and object_err functions
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

Remove static and add the function declarations to
include/linux/slub_def.h so they can be used by the kernel address
sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 5 +++++
 mm/slub.c                | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..144b5cb 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,9 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+__printf(3, 4)
+void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 95d2142..0c01584 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,14 +629,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {
 	va_list args;
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.

These hooks separate payload accesses from metadata accesses, which
might be useful for different checkers (e.g. KASan).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 0c01584..88ad8b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

Wrap accesses to an object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.

These hooks separate payload accesses from metadata accesses, which
might be useful for different checkers (e.g. KASan).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 0c01584..88ad8b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 08/12] mm: slub: add kernel address sanitizer support for slub allocator
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as
free. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as a redzone
(inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the actually allocated area. Such callers may validly
access the whole allocated memory, so it should be marked as
accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation for these files is disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 36 ++++++++++++++++++--
 9 files changed, 192 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8a2457d..5dc0d69 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 386cc8b..1fa4fe8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 930b52d..088c68e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b336073..7bb20ad 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2a6a961..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 8ac3b6b..185d04c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -76,11 +81,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e03dd6f..4dcbc2d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -789,6 +789,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -973,8 +974,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 88ad8b8..cb2aba4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,6 +1269,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1287,6 +1293,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1381,8 +1389,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1419,8 +1430,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2491,6 +2504,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2517,6 +2531,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2900,6 +2916,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3272,6 +3289,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3315,12 +3334,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3336,6 +3357,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread
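
To picture what kasan_kmalloc() above does to the shadow, consider a
hypothetical kmalloc(10) served from a 32-byte slab object: the first shadow
byte becomes 0 (all 8 bytes accessible), the second becomes 2 (2 of 8 bytes
accessible), and the remaining granules up to cache->size are filled with
KASAN_KMALLOC_REDZONE. The sketch below is illustrative userspace C, not
kernel code; the object size and request size are assumptions chosen for the
example.

    #include <stdio.h>

    #define SHADOW_SCALE    8       /* one shadow byte covers 8 bytes */
    #define KMALLOC_REDZONE 0xfc    /* KASAN_KMALLOC_REDZONE in the patch */

    int main(void)
    {
            size_t object_size = 32;        /* hypothetical cache->size */
            size_t requested = 10;          /* kmalloc(10, ...) */

            for (size_t i = 0; i < object_size / SHADOW_SCALE; i++) {
                    size_t start = i * SHADOW_SCALE;
                    unsigned int val;

                    if (start + SHADOW_SCALE <= requested)
                            val = 0;                        /* fully accessible */
                    else if (start < requested)
                            val = requested - start;        /* partially accessible */
                    else
                            val = KMALLOC_REDZONE;          /* redzone up to cache->size */

                    printf("shadow[%zu] (object bytes %2zu..%2zu) = 0x%02x\n",
                           i, start, start + SHADOW_SCALE - 1, val);
            }
            return 0;
    }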

* [PATCH v8 08/12] mm: slub: add kernel address sanitizer support for slub allocator
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as
free. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as a redzone
(inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the actually allocated area. Such callers may validly
access the whole allocated memory, so it should be marked as
accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation for these files is disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h | 21 ++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +++
 mm/kasan/report.c     | 25 ++++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 36 ++++++++++++++++++--
 9 files changed, 192 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9714fba..0463b90 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -32,6 +32,16 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
 
 #else /* CONFIG_KASAN */
 
@@ -42,6 +52,17 @@ static inline void kasan_disable_local(void) {}
 
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+					struct page *page) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8a2457d..5dc0d69 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 386cc8b..1fa4fe8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 930b52d..088c68e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b336073..7bb20ad 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,97 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
+			struct page *page)
+{
+	unsigned long object_end = (unsigned long)object + s->size;
+	unsigned long padding_start = round_up(object_end,
+					KASAN_SHADOW_SCALE_SIZE);
+	unsigned long padding_end = (unsigned long)page_address(page) +
+					(PAGE_SIZE << compound_order(page));
+	size_t size = padding_end - padding_start;
+
+	kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)object + cache->size;
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2a6a961..049349b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_PADDING      0xFD  /* Slab page padding, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 8ac3b6b..185d04c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -54,10 +55,14 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_PAGE_REDZONE:
+	case KASAN_SLAB_PADDING:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
 	case KASAN_SHADOW_GAP:
@@ -76,11 +81,31 @@ static void print_error_description(struct access_info *info)
 static void print_address_description(struct access_info *info)
 {
 	struct page *page;
+	struct kmem_cache *cache;
 	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_SLAB_PADDING:
+		cache = page->slab_cache;
+		slab_err(cache, page, "access to slab redzone");
+		dump_stack();
+		break;
+	case KASAN_KMALLOC_FREE:
+	case KASAN_KMALLOC_REDZONE:
+	case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		if (PageSlab(page)) {
+			void *object;
+			void *slab_page = page_address(page);
+
+			cache = page->slab_cache;
+			object = virt_to_obj(cache, slab_page,
+					(void *)info->access_addr);
+			object_err(cache, page, object, "kasan error");
+			break;
+		}
+	case KASAN_PAGE_REDZONE:
 	case KASAN_FREE_PAGE:
 		dump_page(page, "kasan error");
 		dump_stack();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e03dd6f..4dcbc2d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -789,6 +789,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -973,8 +974,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 88ad8b8..cb2aba4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1264,6 +1269,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	flags &= gfp_allowed_mask;
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1287,6 +1293,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1381,8 +1389,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_slab_alloc(s, object);
 		s->ctor(object);
+	}
+	kasan_slab_free(s, object);
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1419,8 +1430,10 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
 			set_freepointer(s, p, p + s->size);
-		else
+		else {
 			set_freepointer(s, p, NULL);
+			kasan_mark_slab_padding(s, p, page);
+		}
 	}
 
 	page->freelist = start;
@@ -2491,6 +2504,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2517,6 +2531,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2900,6 +2916,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3272,6 +3289,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3315,12 +3334,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3336,6 +3357,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 09/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Alexander Viro

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a few
bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dname is allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses. To avoid such reports we mark the rounded-up allocation size
as accessible in the shadow.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index e368d4f..81561c8 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1430,6 +1432,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread
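
The size unpoisoned above is simply name->len + 1 rounded up to the word size
that the word-at-a-time comparison may read. A minimal illustrative userspace
sketch (not kernel code), assuming a 64-bit build where
sizeof(unsigned long) == 8:

    #include <stdio.h>

    /*
     * Open-coded power-of-two round up, equivalent to the kernel's round_up()
     * for the word size used here.
     */
    #define round_up(x, y)  (((x) + (y) - 1) & ~((y) - 1))

    int main(void)
    {
            for (unsigned long len = 1; len <= 9; len++)
                    printf("name->len = %lu -> unpoison %lu bytes\n", len,
                           (unsigned long)round_up(len + 1, sizeof(unsigned long)));
            return 0;
    }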

* [PATCH v8 09/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Alexander Viro

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a few
bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dname is allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses. To avoid such reports we mark the rounded-up allocation size
as accessible in the shadow.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index e368d4f..81561c8 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1430,6 +1432,10 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
+#endif
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 10/12] kmemleak: disable kasan instrumentation for kmemleak
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Catalin Marinas

kmalloc internally rounds up the allocation size, and kmemleak uses
the rounded-up size as the object's size. This makes kasan complain
while kmemleak scans memory or calculates an object's checksum. The
simplest solution here is to disable kasan around these accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 10/12] kmemleak: disable kasan instrumentation for kmemleak
@ 2014-11-27 16:00     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Catalin Marinas

kmalloc internally rounds up the allocation size, and kmemleak uses
the rounded-up size as the object's size. This makes kasan complain
while kmemleak scans memory or calculates an object's checksum. The
simplest solution here is to disable kasan around these accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.1.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 11/12] lib: add kasan test module
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more tests here in the future (like
out-of-bounds accesses to stack/global variables and so on).
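
To run it, build with CONFIG_TEST_KASAN=m and load the resulting
test_kasan.ko; the init function below returns -EAGAIN, so the module
never stays loaded and can simply be inserted again for another run.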

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 263 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 1fa4fe8..8548646 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index c47f092..4a562a6 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..896dee5
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v8 12/12] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  2014-11-27 16:00   ` Andrey Ryabinin
@ 2014-11-27 16:00     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2014-11-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Matt Fleming, Thomas Gleixner,
	Ingo Molnar

Recently, instrumentation of builtin function calls was removed from GCC 5.0.
To check the memory accessed by such functions, userspace asan always uses
interceptors for them.

So now we should do this as well. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__' prefix.
For files that are built without kasan instrumentation (e.g. mm/slub.c),
the original mem* functions are replaced (via #define) with the prefixed
variants, because we don't want to check memory accesses there.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 31 ++++++++++++++++++++++++++++++-
 9 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 304ab29..fbe0548 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -39,4 +39,8 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
 
 void *get_fdt(efi_system_table_t *sys_table);
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 #endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7bb20ad..bb0443b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.1.3


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v8 10/12] kmemleak: disable kasan instrumentation for kmemleak
  2014-11-27 16:00     ` Andrey Ryabinin
@ 2014-12-01 16:28       ` Catalin Marinas
  -1 siblings, 0 replies; 862+ messages in thread
From: Catalin Marinas @ 2014-12-01 16:28 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel

On Thu, Nov 27, 2014 at 04:00:54PM +0000, Andrey Ryabinin wrote:
> kmalloc internally rounds up the allocation size, and kmemleak
> uses the rounded-up size as the object's size. This makes kasan
> complain while kmemleak scans memory or calculates an object's
> checksum. The simplest solution here is to disable kasan checks
> for these accesses.

This would indeed be the simplest since by the time kmemleak callbacks
get called (from slub) we lose the original size information (especially
for kmem_cache_alloc).

> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  mm/kmemleak.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v8 01/12] Add kernel address sanitizer infrastructure.
  2014-11-27 16:00     ` Andrey Ryabinin
@ 2014-12-01 23:13       ` David Rientjes
  -1 siblings, 0 replies; 862+ messages in thread
From: David Rientjes @ 2014-12-01 23:13 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, H. Peter Anvin, x86,
	linux-mm, linux-kernel, Jonathan Corbet, Michal Marek,
	Ingo Molnar, Peter Zijlstra

On Thu, 27 Nov 2014, Andrey Ryabinin wrote:

> diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
> new file mode 100644
> index 0000000..a3a9009
> --- /dev/null
> +++ b/Documentation/kasan.txt
> @@ -0,0 +1,169 @@
> +Kernel address sanitizer
> +================
> +
> +0. Overview
> +===========
> +
> +Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> +a fast and comprehensive solution for finding use-after-free and out-of-bounds
> +bugs.
> +
> +KASan uses compile-time instrumentation for checking every memory access,
> +therefore you will need a certain version of GCC >= 4.9.2
> +
> +Currently KASan is supported only for x86_64 architecture and requires that the
> +kernel be built with the SLUB allocator.
> +
> +1. Usage
> +=========
> +
> +To enable KASAN configure kernel with:
> +
> +	  CONFIG_KASAN = y
> +
> +and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline/inline
> +is compiler instrumentation types. The former produces smaller binary the
> +latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
> +latter.
> +
> +Currently KASAN works only with the SLUB memory allocator.
> +For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
> +at least 'slub_debug=U' in the boot cmdline.
> +
> +To disable instrumentation for specific files or directories, add a line
> +similar to the following to the respective kernel Makefile:
> +
> +        For a single file (e.g. main.o):
> +                KASAN_SANITIZE_main.o := n
> +
> +        For all files in one directory:
> +                KASAN_SANITIZE := n
> +

More precisely, this requires CONFIG_SLUB_DEBUG and not just CONFIG_SLUB.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v9 00/17]  Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2015-01-21 16:51   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Linus Torvalds, Catalin Marinas

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the
kernel to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, so you
will need a fresh GCC >= v4.9.2.

Patches are based on top of 3.19-rc5 and available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v9

Changes since v8:
	- Fixed unpoisoned redzones for not-yet-allocated objects
	    in a newly allocated slab page. (from Dmitry C.)

	- Some minor non-functional cleanups in kasan internals.

	- Added ack from Catalin

	- Added stack instrumentation. With this we can detect
	    out-of-bounds accesses to stack variables. (patch 12)

	- Added globals instrumentation - catching out-of-bounds
	    accesses to global variables. (patches 13-17)

	- Shadow moved out of vmalloc into the hole between vmemmap
	    and the %esp fixup stacks. For globals instrumentation
	    we will need shadow memory backing module addresses,
	    so we need some sort of shadow memory allocator
	    (something like the vmemmap_populate() function, except
	    that it should be available after boot).

	    __vmalloc_node_range() suits that purpose, except that
	    it can't be used for allocating shadow in the vmalloc
	    area, because shadow in vmalloc is already 'allocated'
	    to protect us from other vmalloc users. So we need
	    16TB of unused addresses, and we have a big enough hole
	    between vmemmap and the %esp fixup stacks. So I moved
	    the shadow there.

Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23
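
	  (That is, 12870.54 / 20.23 ~= 636x for inline and 10586.39 / 20.23 ~= 523x
	  for outline, which is where the x500-x600 figure comes from.)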

	- Also, kmemcheck can't work on several CPUs - it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
    (on x86_64 16TB of virtual address space reserved for shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is function to translate address to corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and other (8 - k) bytes are not;
    Any negative value indicates that the entire 8-bytes are inaccessible.
    Different negative values used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether memory region is valid to access or not by checking
    corresponding shadow memory. If access is not valid an error printed.
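
    A minimal userspace sketch of this mapping and the check logic (illustrative
    only; the KASAN_SHADOW_OFFSET value below is an assumption matching the
    x86_64 layout described in this series, and access_is_bad() emulates what a
    compiler-inserted check does for a 1-byte access):

        #include <stdio.h>

        #define KASAN_SHADOW_SCALE_SHIFT  3
        /* Offset assumed from the x86_64 shadow layout in this series. */
        #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL

        static unsigned long kasan_mem_to_shadow(unsigned long addr)
        {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
        }

        /* Emulate the check for a 1-byte access, given the shadow byte value. */
        static int access_is_bad(unsigned long addr, signed char shadow)
        {
                if (shadow == 0)
                        return 0;               /* whole 8-byte granule accessible */
                if (shadow < 0)
                        return 1;               /* redzone or freed memory */
                /* 1..7: only the first 'shadow' bytes of the granule are valid */
                return (addr & 7) >= shadow;
        }

        int main(void)
        {
                unsigned long addr = 0xffff88001234567dUL;

                printf("shadow byte for %lx lives at %lx\n",
                       addr, kasan_mem_to_shadow(addr));
                /* offset 5 within the granule, shadow byte 4 -> out of bounds */
                printf("1-byte access: %s\n", access_is_bad(addr, 4) ? "bad" : "ok");
                return 0;
        }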


Changelog for previous versions:
===============================

Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y (from Sasha).

        - Don't poison the redzone on freeing, since it is already poisoned (from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.


Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently, instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before a general protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged into the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives
      an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        The GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with a future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. For now let's wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64, to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (17):
  Add kernel address sanitizer infrastructure.
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share object_err function
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  kasan: enable stack instrumentation
  mm: vmalloc: add flag preventing guard hole allocation
  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  kernel: add support for .init_array.* constructors
  module: fix types of device tables aliases
  kasan: enable instrumentation of global variables

 Documentation/kasan.txt                | 169 ++++++++++++
 Documentation/x86/x86_64/mm.txt        |   2 +
 Makefile                               |  29 +-
 arch/arm/kernel/module.c               |   2 +-
 arch/arm64/kernel/module.c             |   2 +-
 arch/mips/kernel/module.c              |   2 +-
 arch/parisc/kernel/module.c            |   2 +-
 arch/s390/kernel/module.c              |   2 +-
 arch/sparc/kernel/module.c             |   2 +-
 arch/unicore32/kernel/module.c         |   2 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  35 +++
 arch/x86/include/asm/page_64_types.h   |  12 +-
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   4 +
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  34 +++
 arch/x86/kernel/module.c               |  14 +-
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 223 +++++++++++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   5 +
 include/asm-generic/vmlinux.lds.h      |   1 +
 include/linux/compiler-gcc4.h          |   4 +
 include/linux/compiler-gcc5.h          |   2 +
 include/linux/init_task.h              |   8 +
 include/linux/kasan.h                  | 102 +++++++
 include/linux/module.h                 |   2 +-
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |   8 +
 include/linux/vmalloc.h                |  13 +-
 kernel/module.c                        |   2 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  55 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 277 +++++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 487 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  70 +++++
 mm/kasan/report.c                      | 254 +++++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  52 +++-
 mm/vmalloc.c                           |  16 +-
 scripts/Makefile.lib                   |  10 +
 scripts/module-common.lds              |   3 +
 64 files changed, 1992 insertions(+), 46 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

--
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
-- 
2.2.1


* [PATCH v9 00/17]  Kernel address sanitizer - runtime memory debugger.
@ 2015-01-21 16:51   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Linus Torvalds, Catalin Marinas

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v4.9.2.
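
A minimal configuration sketch for trying this series (just an illustration; the
relevant options are described in patch 1's Documentation/kasan.txt and Kconfig.kasan):

	CONFIG_SLUB=y
	CONFIG_SLUB_DEBUG=y
	CONFIG_STACKTRACE=y
	CONFIG_KASAN=y
	CONFIG_KASAN_OUTLINE=y		# or CONFIG_KASAN_INLINE=y with GCC 5.0

Booting with slub_debug=U additionally records alloc/free stack traces, which makes
the reports more useful.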

Patches are based on top of 3.19-rc5 and available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v9

Changes since v8:
	- Fixed unpoisoned redzones for not-yet-allocated objects
	    in a newly allocated slab page. (from Dmitry C.)

	- Some minor non-functional cleanups in kasan internals.

	- Added ack from Catalin

	- Added stack instrumentation. With this we can detect
	    out-of-bounds accesses in stack variables. (patch 12)

	- Added globals instrumentation - catching out-of-bounds accesses in
	    global variables. (patches 13-17)

	- Shadow moved out of vmalloc into the hole between vmemmap
	    and the %esp fixup stacks. For globals instrumentation
	    we will need shadow backing module addresses, so we need
	    some sort of shadow memory allocator (something like the
	    vmemmap_populate() function, except that it should be
	    available after boot).

	    __vmalloc_node_range() suits that purpose, except that
	    it can't be used to allocate shadow for the vmalloc
	    area, because the shadow for vmalloc is already 'allocated'
	    to protect us from other vmalloc users. So we need
	    16TB of unused addresses, and we have a big enough hole
	    between vmemmap and the %esp fixup stacks. So I moved the shadow
	    there.

Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck can't work with several CPUs: it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone being overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before the bad access happens, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses a direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte word is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access or not by checking
    the corresponding shadow memory. If the access is not valid, an error is printed.
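
    As an illustration, here is a minimal sketch (not the kernel implementation; the
    real checks live in mm/kasan/kasan.c, added in patch 1, and the helper name below
    is made up) of what an inserted __asan_load1(addr) call conceptually does, given
    the shadow mapping and byte encoding described above:

         static bool addr_is_poisoned_1(unsigned long addr)
         {
                 /* One shadow byte describes one aligned 8-byte granule. */
                 s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

                 if (shadow == 0)
                         return false;  /* all 8 bytes are accessible */
                 if (shadow < 0)
                         return true;   /* redzone, freed memory, etc. */
                 /* 1..7: only the first 'shadow' bytes are accessible. */
                 return (s8)(addr & 7) >= shadow;
         }

    For example, if the shadow byte is 03, loads from the first three bytes of the
    granule pass, while a load from the fourth byte triggers a report.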


Changelog for previous versions:
===============================

Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y from Sasha.

        - Don't poison the redzone on freeing, since it is already poisoned (from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.


Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently, instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before a general protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changes in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged into the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives
      an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with a future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the documentation (GCC kasan patches were backported into the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. Let's first wait for the GCC patches to be merged.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped into the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places


Andrey Ryabinin (17):
  Add kernel address sanitizer infrastructure.
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share object_err function
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  kasan: enable stack instrumentation
  mm: vmalloc: add flag preventing guard hole allocation
  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  kernel: add support for .init_array.* constructors
  module: fix types of device tables aliases
  kasan: enable instrumentation of global variables

 Documentation/kasan.txt                | 169 ++++++++++++
 Documentation/x86/x86_64/mm.txt        |   2 +
 Makefile                               |  29 +-
 arch/arm/kernel/module.c               |   2 +-
 arch/arm64/kernel/module.c             |   2 +-
 arch/mips/kernel/module.c              |   2 +-
 arch/parisc/kernel/module.c            |   2 +-
 arch/s390/kernel/module.c              |   2 +-
 arch/sparc/kernel/module.c             |   2 +-
 arch/unicore32/kernel/module.c         |   2 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  35 +++
 arch/x86/include/asm/page_64_types.h   |  12 +-
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   4 +
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  34 +++
 arch/x86/kernel/module.c               |  14 +-
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 223 +++++++++++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   5 +
 include/asm-generic/vmlinux.lds.h      |   1 +
 include/linux/compiler-gcc4.h          |   4 +
 include/linux/compiler-gcc5.h          |   2 +
 include/linux/init_task.h              |   8 +
 include/linux/kasan.h                  | 102 +++++++
 include/linux/module.h                 |   2 +-
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |   8 +
 include/linux/vmalloc.h                |  13 +-
 kernel/module.c                        |   2 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  55 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 277 +++++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 487 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  70 +++++
 mm/kasan/report.c                      | 254 +++++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  52 +++-
 mm/vmalloc.c                           |  16 +-
 scripts/Makefile.lib                   |  10 +
 scripts/module-common.lds              |   3 +
 64 files changed, 1992 insertions(+), 46 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

--
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
-- 
2.2.1



* [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation to check every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses a direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte word is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
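
For example (purely illustrative numbers): a 13-byte object starting at an 8-byte
aligned address spans two 8-byte granules, so its two shadow bytes would be 00 and 05,
meaning the whole first granule plus the first 5 bytes of the second one are accessible,
while the redzone placed after the object is described by negative shadow values.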

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access or not by checking
the corresponding shadow memory. If the access is not valid, an error is printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck can't work with several CPUs: it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone being overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before the bad access happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |  27 +++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  45 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 193 ++++++++++++++++++++++
 scripts/Makefile.lib                  |  10 ++
 13 files changed, 844 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline
+are compiler instrumentation types. The former produces a smaller binary while the
+latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler inserts
+function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
+access of size 1, 2, 4, 8 or 16. These functions check whether the memory access is
+valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making
+function calls, GCC directly inserts the code to check the shadow memory.
+This option significantly enlarges the kernel but gives a x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index fb93350..ee5830b 100644
--- a/Makefile
+++ b/Makefile
@@ -423,7 +423,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -744,6 +744,31 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..063a3f3
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,45 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5f2ce61..b2b0d95 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call,
+	  __asan_load*/__asan_store*. These functions perform a check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section as
+	  much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about a x2 boost over outline instrumentation), but
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 4bf586e..af0d917 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..740d5b2
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort);
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..62b942a
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,193 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
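+/*
+ * Entry points for inline instrumentation: the compiler emits the shadow
+ * check itself and calls one of these report functions only when the check
+ * has already failed.
+ */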
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides a
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
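
For example, if CONFIG_KASAN_SHADOW_OFFSET were 0xdffffc0000000000 (the value is
arch-specific; it is used here purely for illustration), the address
0xffff880000000000 would map to the shadow byte at
(0xffff880000000000 >> 3) + 0xdffffc0000000000 = 0xffffed0000000000,
and that single shadow byte describes the 8 bytes starting at the original address.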

So for every 8 bytes of memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
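
As an illustration (this helper is not part of the patch, only a sketch of the
encoding above), deciding whether a single byte is accessible from its shadow
byte amounts to:

     static bool byte_is_accessible(s8 shadow_value, unsigned long addr)
     {
             s8 byte_offset = addr & KASAN_SHADOW_MASK;      /* 0..7 */

             if (shadow_value == 0)
                     return true;    /* whole 8-byte region accessible */
             if (shadow_value < 0)
                     return false;   /* redzone, freed memory, ... */
             return byte_offset < shadow_value; /* first k bytes accessible */
     }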

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
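
For example, with outline instrumentation a 4-byte store such as

     p->field = 1;

is compiled roughly as if it were written as

     __asan_store4((unsigned long)&p->field);
     p->field = 1;

(the exact code is generated by the compiler; this only shows where the check
happens relative to the access).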

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck can't work on more than one CPU; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches
	  bugs right as they happen, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |  27 +++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  45 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 193 ++++++++++++++++++++++
 scripts/Makefile.lib                  |  10 ++
 13 files changed, 844 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for the x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline
+are compiler instrumentation types. The former produces a smaller binary while the
+latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See the 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible: only the first 3 bytes of the
+corresponding 8-byte region are accessible, so the reported 1-byte write at
+offset 3 within that region (address ...c5d3) hits the first inaccessible byte.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler inserts
+function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
+access of size 1, 2, 4, 8 or 16. These functions check whether the memory access is
+valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function
+calls, GCC directly inserts the code to check the shadow memory.
+This option significantly enlarges the kernel, but it gives a x1.1-x2 performance
+boost over an outline-instrumented kernel.
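+
+For illustration (a sketch of the idea, not literal compiler output), with
+outline instrumentation a 4-byte write is preceded by a call:
+
+	__asan_store4((unsigned long)ptr);
+	*ptr = val;
+
+With inline instrumentation the compiler emits an equivalent of the check
+itself, roughly:
+
+	s8 shadow = *(s8 *)kasan_mem_to_shadow((unsigned long)ptr);
+	if (unlikely(shadow && (((long)ptr & 7) + 3 >= shadow)))
+		__asan_report_store4_noabort((unsigned long)ptr);
+	*ptr = val;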
diff --git a/Makefile b/Makefile
index fb93350..ee5830b 100644
--- a/Makefile
+++ b/Makefile
@@ -423,7 +423,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -744,6 +744,31 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
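+# A high call threshold under CONFIG_KASAN_INLINE lets GCC emit the shadow
+# checks inline; a threshold of 0 forces outline instrumentation, where every
+# check becomes a call to __asan_load*()/__asan_store*().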
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..063a3f3
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,45 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5f2ce61..b2b0d95 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables the address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*()/__asan_store*(). These functions check the
+	  shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline instrumentation (in some
+	  workloads it gives about a x2 boost), but it makes the kernel's
+	  .text section much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 4bf586e..af0d917 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..740d5b2
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
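+/*
+ * memory_is_poisoned_{1,2,4,8,16}() check the shadow byte(s) covering a
+ * fixed-size access; memory_is_poisoned() selects between them at compile
+ * time when the access size is a compile-time constant.
+ */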
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
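+/*
+ * bytes_is_zero()/memory_is_zero() scan a shadow range and return the address
+ * of the first non-zero shadow byte, or 0 if the whole range is zero
+ * (i.e. fully accessible). They back the variable-size check below.
+ */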
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
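+/*
+ * The instrumented code calls __asan_load{1,2,4,8,16}()/__asan_store{1,2,4,8,16}()
+ * (or their *_noabort aliases) before each memory access; all of them simply
+ * forward to check_memory_region() with the access size and direction.
+ */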
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort);
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..62b942a
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,193 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
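+/*
+ * Entry points for inline instrumentation: the compiler emits the shadow
+ * check itself and calls one of these report functions only when the check
+ * has already failed.
+ */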
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8-bytes are inaccessible.
Different negative values used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether memory region is valid to access or not by checking
corresponding shadow memory. If access is not valid an error printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in out internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of unitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck couldn't work on several CPUs. It always sets number of CPUs to 1.
	  KASan doesn't have such limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases are not able to detect bad reads,
	  KASan able to detect both reads and writes.

	- In some cases (e.g. redzone overwritten) SLUB_DEBUG detect
	  bugs only on allocation/freeing of object. KASan catch
	  bugs right before it will happen, so we always know exact
	  place of first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |  27 +++-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  45 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 193 ++++++++++++++++++++++
 scripts/Makefile.lib                  |  10 ++
 13 files changed, 844 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need a certain version of GCC >= 4.9.2
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline/inline
+is compiler instrumentation types. The former produces smaller binary the
+latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+latter.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the SLUB object where the bad access
+happened.
+See the 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+The last section of the report shows the memory state around the accessed
+address. Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
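+
+For example, a report like the one above can be produced by code along these
+lines (a simplified sketch: kmalloc(123) leaves the object's last shadow byte
+equal to 03, since 123 = 15 * 8 + 3, and the write one byte past the end of
+the object hits the redzone):
+
+	char *ptr = kmalloc(123, GFP_KERNEL);
+
+	if (ptr) {
+		ptr[123] = 'x';	/* out of bounds write, one byte past the end */
+		kfree(ptr);
+	}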
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
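+
+For example, with the x86_64 shadow offset added later in this patch series
+(KASAN_SHADOW_OFFSET = 0xdffffc0000000000), the bad address from the report
+above is translated as follows:
+
+     kasan_mem_to_shadow(0xffff8800693bc5d3)
+             = (0xffff8800693bc5d3 >> 3) + 0xdffffc0000000000
+             = 0x1ffff1000d2778ba + 0xdffffc0000000000
+             = 0xffffed000d2778ba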
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+access is valid by inspecting the corresponding shadow memory.
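+
+For example, with this (outline) instrumentation a one-byte store like
+
+     p[i] = 'x';
+
+roughly becomes (a sketch, not the exact compiler output):
+
+     __asan_store1((unsigned long)&p[i]);
+     p[i] = 'x';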
+
+GCC 5.0 can also perform inline instrumentation. Instead of emitting function
+calls, GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives an x1.1-x2 performance boost
+over an outline-instrumented kernel.
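+
+With inline instrumentation the same store is roughly expanded to the
+following, mirroring the check done by __asan_store1()/memory_is_poisoned_1()
+in mm/kasan/kasan.c (again just a sketch, with addr standing for
+(unsigned long)&p[i]):
+
+     s8 shadow = *(s8 *)((addr >> 3) + KASAN_SHADOW_OFFSET);
+
+     if (unlikely(shadow && (s8)(addr & 7) >= shadow))
+             __asan_report_store1_noabort(addr);
+     p[i] = 'x';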
diff --git a/Makefile b/Makefile
index fb93350..ee5830b 100644
--- a/Makefile
+++ b/Makefile
@@ -423,7 +423,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -744,6 +744,31 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
 KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 CHECKFLAGS     += $(NOSTDINC_FLAGS)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..063a3f3
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,45 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8db31ef..26e1b47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1662,6 +1662,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5f2ce61..b2b0d95 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call
+	  (__asan_load*/__asan_store*). These functions check the shadow
+	  memory. This is slower than inline instrumentation, but it
+	  doesn't bloat the size of the kernel's .text section as much
+	  as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code that checks the shadow memory
+	  before memory accesses. This is faster than outline instrumentation
+	  (in some workloads it gives about a x2 boost), but it makes the
+	  kernel's .text section much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index 4bf586e..af0d917 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..740d5b2
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort);
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
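
For reference, DECLARE_ASAN_CHECK(1) above expands to roughly the following;
this is how the checks emitted by the compiler end up in check_memory_region():

	void __asan_load1(unsigned long addr)
	{
		check_memory_region(addr, 1, false);
	}
	EXPORT_SYMBOL(__asan_load1);
	__attribute__((alias("__asan_load1")))
	void __asan_load1_noabort(unsigned long);
	EXPORT_SYMBOL(__asan_load1_noabort);

	void __asan_store1(unsigned long addr)
	{
		check_memory_region(addr, 1, true);
	}
	EXPORT_SYMBOL(__asan_store1);
	__attribute__((alias("__asan_store1")))
	void __asan_store1_noabort(unsigned long);
	EXPORT_SYMBOL(__asan_store1_noabort);
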
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..62b942a
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,193 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..a5845a2 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 02/17] x86_64: add KASan support
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Jonathan Corbet, Andy Lutomirski, open list:DOCUMENTATION

This patch adds arch specific code for kernel address sanitizer.

16TB of virtual address space is used for the shadow memory.
It's located in the range [ffffec0000000000 - fffffc0000000000],
between the vmemmap and the %esp fixup stacks.
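
With KASAN_SHADOW_OFFSET = 0xdffffc0000000000 (the x86_64 default added to
lib/Kconfig.kasan below), this range follows from the definitions added to
arch/x86/include/asm/kasan.h:

    KASAN_SHADOW_START = 0xdffffc0000000000 + (0xffff800000000000 >> 3)
                       = 0xdffffc0000000000 + 0x1ffff00000000000
                       = 0xffffec0000000000
    KASAN_SHADOW_END   = KASAN_SHADOW_START + (1ULL << (47 - 3))
                       = 0xfffffc0000000000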

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr);
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/x86/x86_64/mm.txt   |   2 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  35 +++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +-
 arch/x86/kernel/head_64.S         |  34 ++++++
 arch/x86/kernel/setup.c           |   3 +
 arch/x86/mm/Makefile              |   3 +
 arch/x86/mm/kasan_init_64.c       | 215 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 16 files changed, 315 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 052ee64..05712ac 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba397bd..f3c0c7d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -84,6 +84,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 3db07f3..57bbf2f 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d999398..0bf4d9f 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..67f8650
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,35 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
+					(0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+extern pte_t kasan_zero_pte[];
+extern pte_t kasan_zero_pmd[];
+extern pte_t kasan_zero_pud[];
+
+extern pte_t kasan_poisoned_pte[];
+extern pte_t kasan_poisoned_pmd[];
+extern pte_t kasan_poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d4502c..74d3f3e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..efcddfa 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_early_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_early_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..287ae04 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,42 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(kasan_zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(kasan_zero_pmd)
+	FILL(kasan_zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(kasan_zero_pud)
+	FILL(kasan_zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(kasan_poisoned_pte)
+	FILL(kasan_poisoned_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_poisoned_pmd)
+	FILL(kasan_poisoned_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_poisoned_pud)
+	FILL(kasan_poisoned_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+
+#ifdef CONFIG_KASAN
+/*
+ * This page is used as the early shadow.
+ * Later we use it to poison large ranges of memory that
+ * shouldn't be accessed by anyone except kasan itself.
+ */
+NEXT_PAGE(kasan_poisoned_page)
+	.skip PAGE_SIZE
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ab4734e..4912b74 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1176,6 +1177,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..70e8082
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,215 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+extern unsigned char kasan_poisoned_page[PAGE_SIZE];
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_poisoned_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+void __init populate_poison_shadow(unsigned long start, unsigned long end)
+{
+	int i;
+	pgd_t *pgd = init_level4_pgt;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_poisoned_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	while (addr + PAGE_SIZE <= end) {
+		WARN_ON(!pte_none(*pte));
+		set_pte(pte, __pte(__pa_nodebug(empty_zero_page)
+					| __PAGE_KERNEL_RO));
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+	return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+		WARN_ON(!pmd_none(*pmd));
+		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+					| __PAGE_KERNEL_RO));
+		addr += PMD_SIZE;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (addr < end) {
+		if (pmd_none(*pmd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pte_populate(pmd, addr, end);
+	}
+	return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pud_t *pud = pud_offset(pgd, addr);
+
+	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+		WARN_ON(!pud_none(*pud));
+		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+					| __PAGE_KERNEL_RO));
+		addr += PUD_SIZE;
+		pud = pud_offset(pgd, addr);
+	}
+
+	if (addr < end) {
+		if (pud_none(*pud)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pmd_populate(pud, addr, end);
+	}
+	return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+	int ret = 0;
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+		WARN_ON(!pgd_none(*pgd));
+		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+					| __PAGE_KERNEL_RO));
+		addr += PGDIR_SIZE;
+		pgd = pgd_offset_k(addr);
+	}
+
+	if (addr < end) {
+		if (pgd_none(*pgd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pud_populate(pgd, addr, end);
+	}
+	return ret;
+}
+
+
+static void __init populate_zero_shadow(unsigned long start, unsigned long end)
+{
+	if (zero_pgd_populate(start, end))
+		panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	populate_zero_shadow(KASAN_SHADOW_START,
+			kasan_mem_to_shadow(PAGE_OFFSET));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			kasan_mem_to_shadow(KASAN_SHADOW_START));
+
+	populate_poison_shadow(kasan_mem_to_shadow(KASAN_SHADOW_START),
+			kasan_mem_to_shadow(KASAN_SHADOW_END));
+
+	populate_zero_shadow(kasan_mem_to_shadow(KASAN_SHADOW_END),
+			KASAN_SHADOW_END);
+
+	memset(kasan_poisoned_page, KASAN_SHADOW_GAP, PAGE_SIZE);
+
+	load_cr3(init_level4_pgt);
+}
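
After kasan_init() the shadow is thus mapped roughly as follows (a summary of
the calls above, with shadow() standing for kasan_mem_to_shadow()):

    [KASAN_SHADOW_START, shadow(PAGE_OFFSET))                  -> zero page, read-only
    shadow of the direct mapping (pfn_mapped[] ranges)         -> real shadow pages,
                                                                  via vmemmap_populate()
    [shadow(PAGE_OFFSET + MAXMEM), shadow(KASAN_SHADOW_START)) -> zero page, read-only
    [shadow(KASAN_SHADOW_START), shadow(KASAN_SHADOW_END))     -> kasan_poisoned_page,
                                                                  filled with KASAN_SHADOW_GAP
    [shadow(KASAN_SHADOW_END), KASAN_SHADOW_END)               -> zero page, read-only
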
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..f86070d 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdffffc0000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 02/17] x86_64: add KASan support
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Jonathan Corbet, Andy Lutomirski, open list:DOCUMENTATION

This patch adds arch specific code for kernel address sanitizer.

16TB of virtual addressed used for shadow memory.
It's located in range [ffffec0000000000 - fffffc0000000000]
between vmemmap and %esp fixup stacks.

At early stage we map whole shadow region with zero page.
Latter, after pages mapped to direct mapping address range
we unmap zero pages from corresponding shadow (see kasan_map_shadow())
and allocate and map a real shadow memory reusing vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before shadow initialized.
__pa with CONFIG_DEBUG_VIRTUAL=y make external function call (__phys_addr)
__phys_addr is instrumented, so __asan_load could be called before
shadow area initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/x86/x86_64/mm.txt   |   2 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  35 +++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +-
 arch/x86/kernel/head_64.S         |  34 ++++++
 arch/x86/kernel/setup.c           |   3 +
 arch/x86/mm/Makefile              |   3 +
 arch/x86/mm/kasan_init_64.c       | 215 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 16 files changed, 315 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 052ee64..05712ac 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba397bd..f3c0c7d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -84,6 +84,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 3db07f3..57bbf2f 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d999398..0bf4d9f 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..67f8650
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,35 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
+					(0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+extern pte_t kasan_zero_pte[];
+extern pte_t kasan_zero_pmd[];
+extern pte_t kasan_zero_pud[];
+
+extern pte_t kasan_poisoned_pte[];
+extern pte_t kasan_poisoned_pmd[];
+extern pte_t kasan_poisoned_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d4502c..74d3f3e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..efcddfa 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_early_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_early_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..287ae04 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,42 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(kasan_zero_pte)
+	FILL(empty_zero_page - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(kasan_zero_pmd)
+	FILL(kasan_zero_pte - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+NEXT_PAGE(kasan_zero_pud)
+	FILL(kasan_zero_pmd - __START_KERNEL_map + __PAGE_KERNEL_RO, 512)
+
+NEXT_PAGE(kasan_poisoned_pte)
+	FILL(kasan_poisoned_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_poisoned_pmd)
+	FILL(kasan_poisoned_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_poisoned_pud)
+	FILL(kasan_poisoned_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+
+#ifdef CONFIG_KASAN
+/*
+ * This page used as early shadow.
+ * Latter we use it to poison large ranges of memory that
+ * shouldn't be accessed by anyone except kasan itself.
+ */
+NEXT_PAGE(kasan_poisoned_page)
+	.skip PAGE_SIZE
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ab4734e..4912b74 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1176,6 +1177,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..70e8082
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,215 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+extern unsigned char kasan_poisoned_page[PAGE_SIZE];
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_poisoned_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+void __init populate_poison_shadow(unsigned long start, unsigned long end)
+{
+	int i;
+	pgd_t *pgd = init_level4_pgt;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_poisoned_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	while (addr + PAGE_SIZE <= end) {
+		WARN_ON(!pte_none(*pte));
+		set_pte(pte, __pte(__pa_nodebug(empty_zero_page)
+					| __PAGE_KERNEL_RO));
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+	return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+		WARN_ON(!pmd_none(*pmd));
+		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+					| __PAGE_KERNEL_RO));
+		addr += PMD_SIZE;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (addr < end) {
+		if (pmd_none(*pmd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pte_populate(pmd, addr, end);
+	}
+	return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pud_t *pud = pud_offset(pgd, addr);
+
+	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+		WARN_ON(!pud_none(*pud));
+		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+					| __PAGE_KERNEL_RO));
+		addr += PUD_SIZE;
+		pud = pud_offset(pgd, addr);
+	}
+
+	if (addr < end) {
+		if (pud_none(*pud)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pmd_populate(pud, addr, end);
+	}
+	return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+	int ret = 0;
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+		WARN_ON(!pgd_none(*pgd));
+		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+					| __PAGE_KERNEL_RO));
+		addr += PGDIR_SIZE;
+		pgd = pgd_offset_k(addr);
+	}
+
+	if (addr < end) {
+		if (pgd_none(*pgd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pud_populate(pgd, addr, end);
+	}
+	return ret;
+}
+
+
+static void __init populate_zero_shadow(unsigned long start, unsigned long end)
+{
+	if (zero_pgd_populate(start, end))
+		panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	populate_zero_shadow(KASAN_SHADOW_START,
+			kasan_mem_to_shadow(PAGE_OFFSET));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			kasan_mem_to_shadow(KASAN_SHADOW_START));
+
+	populate_poison_shadow(kasan_mem_to_shadow(KASAN_SHADOW_START),
+			kasan_mem_to_shadow(KASAN_SHADOW_END));
+
+	populate_zero_shadow(kasan_mem_to_shadow(KASAN_SHADOW_END),
+			KASAN_SHADOW_END);
+
+	memset(kasan_poisoned_page, KASAN_SHADOW_GAP, PAGE_SIZE);
+
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..f86070d 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdffffc0000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 03/17] mm: page_alloc: add kasan hooks on alloc and free paths
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

Add kernel address sanitizer hooks to mark the addresses of allocated
pages as accessible in the corresponding shadow region, and to mark
freed pages as inaccessible.
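
As an illustration (not part of this patch): one shadow byte covers
KASAN_SHADOW_SCALE_SIZE (8) bytes of memory, so these hooks touch
(PAGE_SIZE << order) / 8 bytes of shadow per call. A minimal sketch,
with shadow_bytes_for_order() being a hypothetical helper used only
for this example:

	/* Hypothetical helper: how much shadow one kasan_alloc_pages()
	 * or kasan_free_pages() call writes for an order-n page. */
	static inline size_t shadow_bytes_for_order(unsigned int order)
	{
		return (PAGE_SIZE << order) >> KASAN_SHADOW_SCALE_SHIFT;
	}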

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  7 +++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/report.c     | 11 +++++++++++
 mm/page_alloc.c       |  3 +++
 5 files changed, 37 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 063a3f3..a278ccc 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -11,6 +11,7 @@ struct page;
 #define KASAN_SHADOW_SCALE_SHIFT 3
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 #include <asm/kasan.h>
@@ -33,6 +34,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -40,6 +44,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 546e571..12f2c7d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -61,6 +62,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 740d5b2..efe8105 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62b942a..7983ebb 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -54,6 +54,9 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -72,6 +75,14 @@ static void print_error_description(struct access_info *info)
 
 static void print_address_description(struct access_info *info)
 {
+	unsigned long addr = info->access_addr;
+
+	if ((addr >= PAGE_OFFSET) &&
+		(addr < (unsigned long)high_memory)) {
+		struct page *page = virt_to_head_page((void *)addr);
+		dump_page(page, "kasan: bad access detected");
+	}
+
 	dump_stack();
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7633c50..3a75171 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kmemcheck.h>
+#include <linux/kasan.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -807,6 +808,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -985,6 +987,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 04/17] mm: slub: introduce virt_to_obj function.
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

virt_to_obj() takes the kmem_cache address, the address of the slab
page and an address x pointing somewhere inside a slab object, and
returns the address of the beginning of that object.
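
For illustration only, a hypothetical caller (the kasan report path
added later in this series uses it the same way): with s->size == 256
and an address 1000 bytes into the slab page, the expression yields
offset 768, i.e. the start of the fourth object.

	/* Sketch of a caller: map an arbitrary address inside a slab
	 * page back to the start of the object containing it. */
	static void *containing_object(struct kmem_cache *s,
				       struct page *page, void *addr)
	{
		return virt_to_obj(s, page_address(page), addr);
	}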

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..c75bc1d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 05/17] mm: slub: share object_err function
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Remove the static qualifier from object_err() and add its declaration
to linux/slub_def.h so that it can be used by the kernel address
sanitizer.
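
For reference, the external caller added later in this series (the
kasan report path) uses it roughly like this:

	object_err(cache, page, object, "kasan: bad access detected");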

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 3 +++
 mm/slub.c                | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c75bc1d..d7d9f26 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,7 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index fe376fe..18777c9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,7 +629,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Wrap accesses to object metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses, which
might be useful for different checkers (e.g. KASan).
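
For context, a later patch in this series wires KASan into these
hooks; the result looks roughly like this:

	static inline void metadata_access_enable(void)
	{
		/* stop reporting while slub touches poisoned bytes */
		kasan_disable_local();
	}

	static inline void metadata_access_disable(void)
	{
		kasan_enable_local();
	}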

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 18777c9..9747976 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 07/17] mm: slub: add kernel address sanitizer support for slub allocator
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Chernenkov, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.

Initially, all objects in a newly allocated slab page are marked as
redzone. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
the size of the really allocated area. Such callers could validly
access the whole allocated memory, so it should be marked as
accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation of these files is disabled.
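
As a worked example (illustrative only, assuming an allocation served
from a 32-byte slub object with KASAN_SHADOW_SCALE_SIZE == 8), this is
what kasan_kmalloc() does for kmalloc(10):

	kasan_unpoison_shadow(object, 10);
		/* shadow[0] = 0x00 -> bytes  0..7  accessible         */
		/* shadow[1] = 0x02 -> bytes  8..9  accessible only    */
	kasan_poison_shadow(object + 16, 16, KASAN_KMALLOC_REDZONE);
		/* shadow[2] = shadow[3] = 0xFC -> bytes 16..31 redzone */

Any access to bytes 10..31 of the object is then reported as
out-of-bounds.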

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
---
 include/linux/kasan.h | 30 ++++++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/report.c     | 22 ++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 34 ++++++++++++++++--
 8 files changed, 199 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a278ccc..940fc4f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -12,6 +12,9 @@ struct page;
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 #include <asm/kasan.h>
@@ -37,6 +40,18 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -47,6 +62,21 @@ static inline void kasan_disable_local(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+					void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 9a139b6..6dc0145 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -326,7 +327,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -334,7 +338,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f86070d..ada0260 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index af0d917..65a55ae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= fremap.o gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index efe8105..c52350e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,103 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_poison_slab(struct page *page)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)object + cache->object_size,
+				KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 7983ebb..f9bc57a 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -55,8 +56,11 @@ static void print_error_description(struct access_info *info)
 
 	switch (shadow_val) {
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
+	case KASAN_PAGE_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -80,6 +84,24 @@ static void print_address_description(struct access_info *info)
 	if ((addr >= PAGE_OFFSET) &&
 		(addr < (unsigned long)high_memory)) {
 		struct page *page = virt_to_head_page((void *)addr);
+
+		if (PageSlab(page)) {
+			void *object;
+			struct kmem_cache *cache = page->slab_cache;
+			void *last_object;
+
+			object = virt_to_obj(cache, page_address(page),
+					(void *)info->access_addr);
+			last_object = page_address(page) +
+				page->objects * cache->size;
+
+			if (unlikely(object > last_object))
+				object = last_object; /* we hit into padding */
+
+			object_err(cache, page, object,
+				"kasan: bad access detected");
+			return;
+		}
 		dump_page(page, "kasan: bad access detected");
 	}
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e03dd6f..4dcbc2d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -789,6 +789,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -973,8 +974,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 9747976..226da95 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -1269,6 +1274,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
 	memcg_kmem_put_cache(s);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1292,6 +1298,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1386,8 +1394,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
+		kasan_poison_object_data(s, object);
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1420,6 +1431,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
+	kasan_poison_slab(page);
+
 	for_each_object_idx(p, idx, s, start, page->objects) {
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
@@ -2495,6 +2508,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2521,6 +2535,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2904,6 +2920,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3276,6 +3293,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3319,12 +3338,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3340,6 +3361,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use whole allocated area,
+	   so we need unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2015-01-21 16:51   ` Andrey Ryabinin
  (?)
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS (VFS...)

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dname was allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses.
To avoid these reports we mark the rounded-up allocation size in the
shadow as accessible.
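
For illustration, a simplified sketch (not the real dcache code) of why a
word-at-a-time comparison reads past the requested size: the last iteration
loads a full unsigned long even when the string length is not a multiple of
the word size:

	/* Simplified word-at-a-time compare, for illustration only. */
	static int wordwise_cmp(const unsigned long *a, const unsigned long *b,
				size_t len)
	{
		size_t i, words = DIV_ROUND_UP(len, sizeof(unsigned long));

		/*
		 * The final load covers up to sizeof(unsigned long) - 1 bytes
		 * beyond 'len'.  That memory exists because kmalloc() rounded
		 * the allocation up, but kasan only unpoisons the requested
		 * size, hence the manual kasan_unpoison_shadow() below.
		 * (The real code also masks the tail bytes before comparing.)
		 */
		for (i = 0; i < words; i++)
			if (a[i] != b[i])
				return 1;
		return 0;
	}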

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index e368d4f..3c097f9 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1430,6 +1432,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 09/17] kmemleak: disable kasan instrumentation for kmemleak
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Catalin Marinas

kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan around
those accesses.
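
Roughly, kasan_disable_local()/kasan_enable_local() bump a per-task nesting
counter that the report path checks before printing anything. A sketch of
the idea (the real definitions live in the kasan core patches of this
series, not here):

	/* Sketch only; see the kasan core patches for the real helpers. */
	static inline void kasan_disable_local(void)
	{
		current->kasan_depth++;
	}

	static inline void kasan_enable_local(void)
	{
		current->kasan_depth--;
	}

	/*
	 * kasan_report() returns early while kasan_depth is non-zero, so the
	 * scan/checksum accesses wrapped by disable/enable never produce a
	 * report even though they touch the rounded-up part of an object.
	 */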

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 10/17] lib: add kasan test module
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more different stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index ada0260..f3bee26 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index 3c3b30b..1c169f0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..098c08e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+	volatile int i = 3;
+	char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+	pr_info("out-of-bounds global variable\n");
+	*(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+	char stack_array[10];
+	volatile int i = 0;
+	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+	pr_info("out-of-bounds on stack\n");
+	*(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	kasan_stack_oob();
+	kasan_global_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 11/17] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Matt Fleming, H. Peter Anvin, Thomas Gleixner,
	Ingo Molnar, open list:EXTENSIBLE FIRMWA...

Recently, instrumentation of builtin function calls was removed from GCC 5.0.
To check the memory accessed by such functions, userspace asan always uses
interceptors for them.

So now we should do this as well. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__' prefix.
For files built without kasan instrumentation (e.g. mm/slub.c) the
original mem* calls are replaced (via #define) with the prefixed variants,
because we don't want to check memory accesses there.
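
A minimal sketch of the user-visible effect (illustration only, not from
this patch): in a kasan-instrumented file an out-of-bounds memset() is now
routed through the checking interceptor:

	static void example_oob_memset(void)
	{
		char *p = kmalloc(10, GFP_KERNEL);	/* typically a 16-byte kmalloc cache */

		if (!p)
			return;
		/*
		 * With kasan this memset() resolves to the interceptor in
		 * mm/kasan/kasan.c, which calls __asan_storeN(p, 16) and
		 * reports the write into the redzone before handing off
		 * to __memset().
		 */
		memset(p, 0, 16);
		kfree(p);
	}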

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 31 ++++++++++++++++++++++++++++++-
 9 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use the non-instrumented versions of the mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 304ab29..fbe0548 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -39,4 +39,8 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
 
 void *get_fdt(efi_system_table_t *sys_table);
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 #endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index c52350e..a59c976 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 12/17] kasan: enable stack instrumentation
  2015-01-21 16:51   ` Andrey Ryabinin
  (?)
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Michal Marek, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function prologue.

Such an approach significantly increases stack usage,
so the size of every in-kernel stack has been doubled.
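
Roughly, the generated code behaves like the following hand-written sketch
(illustration only; poison_shadow()/unpoison_shadow() are placeholders for
the shadow stores GCC actually emits, and the redzone sizes are made up):

	void poison_shadow(const void *addr, size_t size, u8 value);
	void unpoison_shadow(const void *addr, size_t size);

	void foo(void)
	{
		char rz_left[32];	/* redzone added by the compiler */
		char array[8];		/* the real local variable       */
		char rz_right[32];	/* redzone added by the compiler */

		/* prologue: mark the redzones as bad in shadow memory */
		poison_shadow(rz_left, sizeof(rz_left), KASAN_STACK_LEFT);
		poison_shadow(rz_right, sizeof(rz_right), KASAN_STACK_RIGHT);

		/*
		 * body: array[8] would land in rz_right, so the inserted
		 * __asan_store1() check reports an out-of-bounds access.
		 */

		/* epilogue: make the whole frame accessible again */
		unpoison_shadow(rz_left,
				sizeof(rz_left) + sizeof(array) + sizeof(rz_right));
	}

The extra redzones are what make instrumented frames noticeably bigger,
which is why the stack sizes above are doubled.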

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile                             |  1 +
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          |  8 ++++++++
 include/linux/init_task.h            |  8 ++++++++
 include/linux/kasan.h                |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 7 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index ee5830b..02530fa 100644
--- a/Makefile
+++ b/Makefile
@@ -755,6 +755,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 74d3f3e..fae4c4e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 70e8082..042f404 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -207,9 +207,17 @@ void __init kasan_init(void)
 			kasan_mem_to_shadow(KASAN_SHADOW_END));
 
 	populate_zero_shadow(kasan_mem_to_shadow(KASAN_SHADOW_END),
+			kasan_mem_to_shadow(__START_KERNEL_map));
+
+	vmemmap_populate(kasan_mem_to_shadow((unsigned long)_stext),
+			kasan_mem_to_shadow((unsigned long)_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
 			KASAN_SHADOW_END);
 
 	memset(kasan_poisoned_page, KASAN_SHADOW_GAP, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 3037fc0..3932e0a 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -247,6 +254,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 940fc4f..f8eca6a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,15 @@ struct page;
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 #include <asm/kasan.h>
 #include <linux/sched.h>
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f9bc57a..faa07f0 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -67,6 +67,12 @@ static void print_error_description(struct access_info *info)
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 12/17] kasan: enable stack instrumentation
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Michal Marek, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function prologue.

Such an approach significantly increases stack usage,
so the sizes of all in-kernel stacks were doubled.
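
For illustration, this is roughly what the instrumented code does for
a function with a single 8-byte local buffer (the redzone sizes, the
explicit memset()s and the 'frame'/'buf' names below are only a
sketch; the real poisoning and checks are emitted inline by GCC):

    void foo(void)
    {
            /* compiler reserves extra stack: [left redzone][buf][right redzone] */
            char frame[32 + 8 + 24];
            char *buf = frame + 32;
            u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)frame);

            /* prologue: poison the redzones in shadow memory */
            memset(shadow, KASAN_STACK_LEFT, 4);      /* 32-byte left redzone  */
            shadow[4] = 0;                            /* buf[0..7] addressable */
            memset(shadow + 5, KASAN_STACK_RIGHT, 3); /* 24-byte right redzone */

            /* every load/store is checked against the shadow */
            __asan_store1((unsigned long)(buf + 8));  /* hits 0xF3 -> KASan report */

            /* epilogue: unpoison the whole frame before returning */
            memset(shadow, 0, 8);
    }

The poisoned redzones are why the stacks had to grow: with
KASAN_STACK_ORDER = 1 the x86_64 thread stack, for example, goes from
16KB (order 2) to 32KB (order 3).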

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile                             |  1 +
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          |  8 ++++++++
 include/linux/init_task.h            |  8 ++++++++
 include/linux/kasan.h                |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 7 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index ee5830b..02530fa 100644
--- a/Makefile
+++ b/Makefile
@@ -755,6 +755,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 74d3f3e..fae4c4e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 70e8082..042f404 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -207,9 +207,17 @@ void __init kasan_init(void)
 			kasan_mem_to_shadow(KASAN_SHADOW_END));
 
 	populate_zero_shadow(kasan_mem_to_shadow(KASAN_SHADOW_END),
+			kasan_mem_to_shadow(__START_KERNEL_map));
+
+	vmemmap_populate(kasan_mem_to_shadow((unsigned long)_stext),
+			kasan_mem_to_shadow((unsigned long)_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
 			KASAN_SHADOW_END);
 
 	memset(kasan_poisoned_page, KASAN_SHADOW_GAP, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 3037fc0..3932e0a 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -247,6 +254,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 940fc4f..f8eca6a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,15 @@ struct page;
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 #include <asm/kasan.h>
 #include <linux/sched.h>
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f9bc57a..faa07f0 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -67,6 +67,12 @@ static void print_error_description(struct access_info *info)
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
-- 
2.2.1


* [PATCH v9 13/17] mm: vmalloc: add flag preventing guard hole allocation
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need
to allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm area
doesn't have a guard hole.
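
A quick illustration of the resulting semantics (illustration only;
__get_vm_area() is an existing helper that passes its flags straight
through to __get_vm_area_node()):

    struct vm_struct *area;

    /* default: an extra guard page is still reserved */
    area = __get_vm_area(PAGE_SIZE, 0, VMALLOC_START, VMALLOC_END);
    /* area->size == 2 * PAGE_SIZE, get_vm_area_size(area) == PAGE_SIZE */
    free_vm_area(area);

    /* with VM_NO_GUARD the area is exactly as large as requested */
    area = __get_vm_area(PAGE_SIZE, VM_NO_GUARD, VMALLOC_START, VMALLOC_END);
    /* area->size == PAGE_SIZE, get_vm_area_size(area) == PAGE_SIZE */
    free_vm_area(area);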

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.1



* [PATCH v9 13/17] mm: vmalloc: add flag preventing guard hole allocation
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need
to allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm area
doesn't have a guard hole.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.1


* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  2015-01-21 16:51   ` Andrey Ryabinin
                       ` (3 preceding siblings ...)
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.
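
With the new parameter, the module-shadow allocation that motivates
this change can be sketched as follows (the function name here is
illustrative only; the actual KASan caller is added later in the
series):

    static void *kasan_alloc_module_shadow(unsigned long shadow_start,
                                           size_t shadow_size)
    {
            /* map the module's shadow at its exact address, no guard page */
            return __vmalloc_node_range(shadow_size, 1,
                                    shadow_start, shadow_start + shadow_size,
                                    GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL,
                                    VM_NO_GUARD, NUMA_NO_NODE,
                                    __builtin_return_address(0));
    }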

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1


* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1



* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1


* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-arm-kernel

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1



* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1


* [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-21 16:51     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-arm-kernel

For instrumenting global variables, KASan will need to shadow the
memory that backs module memory. So on module loading we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
'vm_flags' parameter to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  2 +-
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..5958d6d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,7 +35,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 50dfafc..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b89b591..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.1


* [PATCH v9 15/17] kernel: add support for .init_array.* constructors
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Arnd Bergmann, open list:GENERIC INCLUDE/A...

KASan uses constructors to initialize the redzones of global
variables. KASan does not actually need constructor priorities, so
GCC 5.0 stopped emitting them, but GCC 4.9.2 still generates
constructors with priorities, which end up in .init_array.* sections.
This patch collects the .init_array.* sections in both the vmlinux
and module linker scripts so that such constructors are executed too.
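
A rough sketch of how the collected constructors are eventually run
(an approximation of the loop in init/main.c, shown only for context;
the linker symbols come from the diff below, the function body is
illustrative):

    typedef void (*ctor_fn_t)(void);

    extern ctor_fn_t __ctors_start[], __ctors_end[];

    static void do_ctors(void)
    {
            ctor_fn_t *fn;

            /* each entry is a compiler-generated constructor, e.g. the
             * one registering a global variable's redzones with KASan */
            for (fn = __ctors_start; fn < __ctors_end; fn++)
                    (*fn)();
    }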

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/asm-generic/vmlinux.lds.h | 1 +
 scripts/module-common.lds         | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bee5d68..ac78910 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -478,6 +478,7 @@
 #define KERNEL_CTORS()	. = ALIGN(8);			   \
 			VMLINUX_SYMBOL(__ctors_start) = .; \
 			*(.ctors)			   \
+			*(SORT(.init_array.*))		   \
 			*(.init_array)			   \
 			VMLINUX_SYMBOL(__ctors_end) = .;
 #else
diff --git a/scripts/module-common.lds b/scripts/module-common.lds
index 0865b3e..01c5849 100644
--- a/scripts/module-common.lds
+++ b/scripts/module-common.lds
@@ -16,4 +16,7 @@ SECTIONS {
 	__kcrctab_unused	: { *(SORT(___kcrctab_unused+*)) }
 	__kcrctab_unused_gpl	: { *(SORT(___kcrctab_unused_gpl+*)) }
 	__kcrctab_gpl_future	: { *(SORT(___kcrctab_gpl_future+*)) }
+
+	. = ALIGN(8);
+	.init_array		: { *(SORT(.init_array.*)) *(.init_array) }
 }
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 16/17] module: fix types of device tables aliases
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Rusty Russell

The MODULE_DEVICE_TABLE() macro is used to create aliases to device
tables. Normally an alias should have the same type as the aliased
symbol.

Device tables are arrays, so their type is 'struct type##_device_id[x]'.
The alias created by MODULE_DEVICE_TABLE() has the non-array type
	'struct type##_device_id'.

This inconsistency confuses the compiler: it can make a wrong
assumption about the variable's size, which leads KASan to produce a
false positive report about an out-of-bounds access.
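
As an illustration (hypothetical driver table, not taken from this
patch; assumes <linux/module.h> and <linux/pci.h>), the new expansion
keeps the array type, so the alias has the same size as the table:

    static const struct pci_device_id my_ids[] = {
            { PCI_DEVICE(0x1234, 0x5678) },     /* made-up IDs */
            { 0, }
    };

    /* MODULE_DEVICE_TABLE(pci, my_ids) now expands roughly to: */
    extern typeof(my_ids) __mod_pci__my_ids_device_table
            __attribute__ ((unused, alias("my_ids")));

    /*
     * The old expansion declared the alias as a plain
     * 'const struct pci_device_id', i.e. a single element, so its
     * apparent size did not match the array behind it.
     */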

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/module.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index ebfb0e1..54e75a7 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
 #ifdef MODULE
 /* Creates an alias so file2alias.c can find device table. */
 #define MODULE_DEVICE_TABLE(type, name)					\
-  extern const struct type##_device_id __mod_##type##__##name##_device_table \
+extern typeof(name) __mod_##type##__##name##_device_table \
   __attribute__ ((unused, alias(__stringify(name))))
 #else  /* !MODULE */
 #define MODULE_DEVICE_TABLE(type, name)
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v9 17/17] kasan: enable instrumentation of global variables
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-21 16:51     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 16:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Michal Marek, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Rusty Russell, open list:KERNEL BUILD + fi...

This feature lets us detect out-of-bounds accesses to global variables.

The idea is simple: the compiler pads each global variable with a
redzone and adds constructors invoking the __asan_register_globals()
function. Information about each global variable (address, size,
size with redzone, ...) is passed to __asan_register_globals() so we
can poison the variable's redzone.

This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
addresses, making shadow memory handling (kasan_module_alloc()/
kasan_module_free()) simpler. Such alignment guarantees that each
shadow page backing the modules' address space corresponds to only
one module_alloc() allocation.
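
Roughly, for a global such as 'char buf[20]', the instrumented code
behaves like the sketch below (names, sizes and descriptor values are
illustrative, not actual GCC output): the compiler pads the variable
with a redzone and emits a constructor passing a struct kasan_global
descriptor (defined in mm/kasan/kasan.h by this patch) to
__asan_register_globals(), which poisons the padding:

    char buf[20];

    /* illustrative descriptor; the real one is filled in by GCC */
    static struct kasan_global buf_desc = {
            .beg = buf,
            .size = 20,                     /* accessible bytes */
            .size_with_redzone = 64,        /* rounded up + redzone */
            .name = "buf",
            .module_name = "example.c",
    };

    static void __attribute__((constructor)) asan_ctor_example(void)
    {
            /* poisons shadow for bytes [20, 64) of the padded object */
            __asan_register_globals(&buf_desc, 1);
    }

With MODULE_ALIGN = PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT, the shadow
of each module mapping starts on a page boundary (see the
WARN_ON(!PAGE_ALIGNED(shadow_start)) check in kasan_module_alloc()
below).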

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Makefile                      |  5 +++--
 arch/x86/kernel/module.c      | 12 ++++++++++-
 arch/x86/mm/kasan_init_64.c   |  2 +-
 include/linux/compiler-gcc4.h |  4 ++++
 include/linux/compiler-gcc5.h |  2 ++
 include/linux/kasan.h         | 11 ++++++++++
 kernel/module.c               |  2 ++
 lib/Kconfig.kasan             |  1 +
 mm/kasan/kasan.c              | 50 +++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h              | 23 ++++++++++++++++++++
 mm/kasan/report.c             | 22 +++++++++++++++++++
 11 files changed, 130 insertions(+), 4 deletions(-)

diff --git a/Makefile b/Makefile
index 02530fa..0a285fe 100644
--- a/Makefile
+++ b/Makefile
@@ -751,11 +751,12 @@ else
 	call_threshold := 0
 endif
 
-CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address \
+				--param asan-globals=1)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
-		--param asan-stack=1 \
+		--param asan-stack=1 --param asan-globals=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e830e61..d1ac80b 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -24,6 +24,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/kasan.h>
 #include <linux/bug.h>
 #include <linux/mm.h>
 #include <linux/gfp.h>
@@ -83,13 +84,22 @@ static unsigned long int get_module_load_offset(void)
 
 void *module_alloc(unsigned long size)
 {
+	void *p;
+
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
-	return __vmalloc_node_range(size, 1,
+
+	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
 				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
+	if (p && (kasan_module_alloc(p, size) < 0)) {
+		vfree(p);
+		return NULL;
+	}
+
+	return p;
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 042f404..112f537 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -213,7 +213,7 @@ void __init kasan_init(void)
 			kasan_mem_to_shadow((unsigned long)_end),
 			NUMA_NO_NODE);
 
-	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_END),
 			KASAN_SHADOW_END);
 
 	memset(kasan_poisoned_page, KASAN_SHADOW_GAP, PAGE_SIZE);
diff --git a/include/linux/compiler-gcc4.h b/include/linux/compiler-gcc4.h
index d1a5582..769e198 100644
--- a/include/linux/compiler-gcc4.h
+++ b/include/linux/compiler-gcc4.h
@@ -85,3 +85,7 @@
 #define __HAVE_BUILTIN_BSWAP16__
 #endif
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#if GCC_VERSION >= 40902
+#define KASAN_ABI_VERSION 3
+#endif
diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
index c8c5659..efee493 100644
--- a/include/linux/compiler-gcc5.h
+++ b/include/linux/compiler-gcc5.h
@@ -63,3 +63,5 @@
 #define __HAVE_BUILTIN_BSWAP64__
 #define __HAVE_BUILTIN_BSWAP16__
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#define KASAN_ABI_VERSION 4
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f8eca6a..5b7debf 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -15,6 +15,7 @@ struct page;
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 /*
@@ -61,8 +62,15 @@ void kasan_krealloc(const void *object, size_t new_size);
 void kasan_slab_alloc(struct kmem_cache *s, void *object);
 void kasan_slab_free(struct kmem_cache *s, void *object);
 
+#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+
+int kasan_module_alloc(void *addr, size_t size);
+void kasan_module_free(void *addr);
+
 #else /* CONFIG_KASAN */
 
+#define MODULE_ALIGN 1
+
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 
 static inline void kasan_enable_local(void) {}
@@ -86,6 +94,9 @@ static inline void kasan_krealloc(const void *object, size_t new_size) {}
 static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_module_free(void *addr) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/kernel/module.c b/kernel/module.c
index 3965511..1689f43 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -56,6 +56,7 @@
 #include <linux/async.h>
 #include <linux/percpu.h>
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 #include <linux/jump_label.h>
 #include <linux/pfn.h>
 #include <linux/bsearch.h>
@@ -1798,6 +1799,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
 void __weak module_free(struct module *mod, void *module_region)
 {
 	vfree(module_region);
+	kasan_module_free(module_region);
 }
 
 void __weak module_arch_cleanup(struct module *mod)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f3bee26..6b00c65 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -7,6 +7,7 @@ config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
 	depends on SLUB_DEBUG
+	select CONSTRUCTORS
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index a59c976..cf26766 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/memblock.h>
 #include <linux/mm.h>
+#include <linux/module.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -388,6 +389,55 @@ void kasan_kfree_large(const void *ptr)
 			KASAN_FREE_PAGE);
 }
 
+int kasan_module_alloc(void *addr, size_t size)
+{
+
+	size_t shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+				PAGE_SIZE);
+	unsigned long shadow_start = kasan_mem_to_shadow((unsigned long)addr);
+	void *ret;
+
+	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
+		return -EINVAL;
+
+	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
+			shadow_start + shadow_size,
+			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
+			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
+			__builtin_return_address(0));
+	return ret ? 0 : -ENOMEM;
+}
+
+void kasan_module_free(void *addr)
+{
+	vfree((void *)kasan_mem_to_shadow((unsigned long)addr));
+}
+
+static void register_global(struct kasan_global *global)
+{
+	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(global->beg, global->size);
+
+	kasan_poison_shadow(global->beg + aligned_size,
+		global->size_with_redzone - aligned_size,
+		KASAN_GLOBAL_REDZONE);
+}
+
+void __asan_register_globals(struct kasan_global *globals, size_t size)
+{
+	int i;
+
+	for (i = 0; i < size; i++)
+		register_global(&globals[i]);
+}
+EXPORT_SYMBOL(__asan_register_globals);
+
+void __asan_unregister_globals(struct kasan_global *globals, size_t size)
+{
+}
+EXPORT_SYMBOL(__asan_unregister_globals);
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index da0e53c..e88a143 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,11 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+/* Don't break randconfig/all*config builds */
+#ifndef KASAN_ABI_VERSION
+#define KASAN_ABI_VERSION 1
+#endif
+
 struct access_info {
 	unsigned long access_addr;
 	unsigned long first_bad_addr;
@@ -14,6 +19,24 @@ struct access_info {
 	unsigned long ip;
 };
 
+struct kasan_source_location {
+	const char *filename;
+	int line_no;
+	int column_no;
+};
+
+struct kasan_global {
+	const void *beg;		/* Address of the beginning of the global variable. */
+	size_t size;			/* Size of the global variable. */
+	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
+	const void *name;
+	const void *module_name;	/* Name of the module where the global variable is declared. */
+	unsigned long has_dynamic_init;	/* This needed for C++ */
+#if KASAN_ABI_VERSION >= 4
+	struct kasan_source_location *location;
+#endif
+};
+
 void kasan_report_error(struct access_info *info);
 void kasan_report_user_access(struct access_info *info);
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index faa07f0..27f3d95 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -23,6 +23,8 @@
 #include <linux/types.h>
 #include <linux/kasan.h>
 
+#include <asm/sections.h>
+
 #include "kasan.h"
 #include "../slab.h"
 
@@ -61,6 +63,7 @@ static void print_error_description(struct access_info *info)
 		break;
 	case KASAN_PAGE_REDZONE:
 	case KASAN_KMALLOC_REDZONE:
+	case KASAN_GLOBAL_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -83,6 +86,19 @@ static void print_error_description(struct access_info *info)
 		info->access_size, current->comm, task_pid_nr(current));
 }
 
+static inline bool kernel_or_module_addr(unsigned long addr)
+{
+	return (addr >= (unsigned long)_stext && addr < (unsigned long)_end)
+		|| (addr >= MODULES_VADDR  && addr < MODULES_END);
+}
+
+static inline bool init_task_stack_addr(unsigned long addr)
+{
+	return addr >= (unsigned long)&init_thread_union.stack &&
+		(addr <= (unsigned long)&init_thread_union.stack +
+			sizeof(init_thread_union.stack));
+}
+
 static void print_address_description(struct access_info *info)
 {
 	unsigned long addr = info->access_addr;
@@ -111,6 +127,12 @@ static void print_address_description(struct access_info *info)
 		dump_page(page, "kasan: bad access detected");
 	}
 
+	if (kernel_or_module_addr(addr)) {
+		if (!init_task_stack_addr(addr))
+			pr_err("Address belongs to variable %pS\n",
+				(void *)addr);
+	}
+
 	dump_stack();
 }
 
-- 
2.2.1


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 07/17] mm: slub: add kernel address sanitizer support for slub allocator
  2015-01-21 16:51     ` Andrey Ryabinin
@ 2015-01-21 20:47       ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2015-01-21 20:47 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Chernenkov, Dmitry Vyukov, Konstantin Serebryany,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Christoph Lameter, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, x86, linux-mm, Pekka Enberg, David Rientjes

On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> Initially all objects in newly allocated slab page, marked as redzone.
> Later, when allocation of slub object happens, requested by caller
> number of bytes marked as accessible, and the rest of the object
> (including slub's metadata) marked as redzone (inaccessible).
> 
> We also mark object as accessible if ksize was called for this object.
> There is some places in kernel where ksize function is called to inquire
> size of really allocated area. Such callers could validly access whole
> allocated memory, so it should be marked as accessible.
> 
> Code in slub.c and slab_common.c files could validly access to object's
> metadata, so instrumentation for this files are disabled.

This one doesn't apply on -next. Is there a missing commit?

Applying: mm: slub: add kernel address sanitizer support for slub allocator
fatal: sha1 information is lacking or useless (mm/slub.c).
Repository lacks necessary blobs to fall back on 3-way merge.
Cannot fall back to three-way merge.
Patch failed at 0007 mm: slub: add kernel address sanitizer support for slub allocator
When you have resolved this problem run "git am --resolved".
If you would prefer to skip this patch, instead run "git am --skip".
To restore the original branch and stop patching run "git am --abort".


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 07/17] mm: slub: add kernel address sanitizer support for slub allocator
  2015-01-21 20:47       ` Sasha Levin
  (?)
@ 2015-01-21 21:48       ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-21 21:48 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, LKML, Dmitry Chernenkov, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Christoph Lameter, Joonsoo Kim,
	Andrew Morton, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

[-- Attachment #1: Type: text/plain, Size: 1709 bytes --]

2015-01-21 23:47 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
> On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
>> With this patch kasan will be able to catch bugs in memory allocated
>> by slub.
>> Initially all objects in newly allocated slab page, marked as redzone.
>> Later, when allocation of slub object happens, requested by caller
>> number of bytes marked as accessible, and the rest of the object
>> (including slub's metadata) marked as redzone (inaccessible).
>>
>> We also mark object as accessible if ksize was called for this object.
>> There is some places in kernel where ksize function is called to inquire
>> size of really allocated area. Such callers could validly access whole
>> allocated memory, so it should be marked as accessible.
>>
>> Code in slub.c and slab_common.c files could validly access to object's
>> metadata, so instrumentation for this files are disabled.
>
> This one doesn't apply on -next. Is there a missing commit?
>

I don't see anything wrong. It's just a difference between the torvalds and -next trees.
I've attached the patch rebased on -next just in case.

There is also a trivial conflict with -next in patch 11/17.

> Applying: mm: slub: add kernel address sanitizer support for slub allocator
> fatal: sha1 information is lacking or useless (mm/slub.c).
> Repository lacks necessary blobs to fall back on 3-way merge.
> Cannot fall back to three-way merge.
> Patch failed at 0007 mm: slub: add kernel address sanitizer support for slub allocator
> When you have resolved this problem run "git am --resolved".
> If you would prefer to skip this patch, instead run "git am --skip".
> To restore the original branch and stop patching run "git am --abort".
>
>
> Thanks,
> Sasha
>

[-- Attachment #2: mm-slub-add-kernel-address-sanitizer-support-for-slu.patch --]
[-- Type: text/x-patch, Size: 14891 bytes --]

From b7b545981438ecfbedf7c525410908f901105e13 Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Date: Thu, 22 Jan 2015 04:12:03 +0300
Subject: [PATCH] mm: slub: add kernel address sanitizer support for slub
 allocator

With this patch kasan is able to catch bugs in memory allocated by slub.
Initially, all objects in a newly allocated slab page are marked as
redzone. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to inquire
about the size of the really allocated area. Such callers may validly
access the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access an object's
metadata, so instrumentation of these files is disabled.
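
As a usage illustration (a hypothetical test, not part of this patch;
assumes <linux/slab.h>): with slub objects redzoned as described above,
a one-byte overflow of a kmalloc()ed buffer lands in poisoned shadow
and the compiler-inserted __asan_store1() check reports it:

    static noinline void kmalloc_oob_right(void)
    {
            size_t size = 123;
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr)
                    return;

            ptr[size] = 'x';        /* one byte past the object: redzone */
            kfree(ptr);
    }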

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
---
 include/linux/kasan.h | 30 ++++++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/report.c     | 22 ++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 34 ++++++++++++++++--
 8 files changed, 199 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a278ccc..940fc4f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -12,6 +12,9 @@ struct page;
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 #include <asm/kasan.h>
@@ -37,6 +40,18 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -47,6 +62,21 @@ static inline void kasan_disable_local(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+					void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 2e3b448..f764096 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -325,7 +326,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -333,7 +337,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f86070d..ada0260 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 79f4fbc..3c1caa2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index efe8105..c52350e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,103 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_poison_slab(struct page *page)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)object + cache->object_size,
+				KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 7983ebb..f9bc57a 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -55,8 +56,11 @@ static void print_error_description(struct access_info *info)
 
 	switch (shadow_val) {
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
+	case KASAN_PAGE_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -80,6 +84,24 @@ static void print_address_description(struct access_info *info)
 	if ((addr >= PAGE_OFFSET) &&
 		(addr < (unsigned long)high_memory)) {
 		struct page *page = virt_to_head_page((void *)addr);
+
+		if (PageSlab(page)) {
+			void *object;
+			struct kmem_cache *cache = page->slab_cache;
+			void *last_object;
+
+			object = virt_to_obj(cache, page_address(page),
+					(void *)info->access_addr);
+			last_object = page_address(page) +
+				page->objects * cache->size;
+
+			if (unlikely(object > last_object))
+				object = last_object; /* we hit into padding */
+
+			object_err(cache, page, object,
+				"kasan: bad access detected");
+			return;
+		}
 		dump_page(page, "kasan: bad access detected");
 	}
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6a7502d..40cef33 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -853,6 +853,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1037,8 +1038,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 0d8eb4a..955155d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -1269,6 +1274,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
 	memcg_kmem_put_cache(s);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1292,6 +1298,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1386,8 +1394,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
+		kasan_poison_object_data(s, object);
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1420,6 +1431,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
+	kasan_poison_slab(page);
+
 	for_each_object_idx(p, idx, s, start, page->objects) {
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
@@ -2504,6 +2517,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2530,6 +2544,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2915,6 +2931,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3287,6 +3304,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3330,12 +3349,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3351,6 +3372,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17]  Kernel address sanitizer - runtime memory debugger.
  2015-01-21 16:51   ` Andrey Ryabinin
@ 2015-01-22  0:22     ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2015-01-22  0:22 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Linus Torvalds, Catalin Marinas

On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
> Changes since v8:
> 	- Fixed unpoisoned redzones for not-allocated-yet object
> 	    in newly allocated slab page. (from Dmitry C.)
> 
> 	- Some minor non-function cleanups in kasan internals.
> 
> 	- Added ack from Catalin
> 
> 	- Added stack instrumentation. With this we could detect
> 	    out of bounds accesses in stack variables. (patch 12)
> 
> 	- Added globals instrumentation - catching out of bounds in
> 	    global varibles. (patches 13-17)
> 
> 	- Shadow moved out from vmalloc into hole between vmemmap
> 	    and %esp fixup stacks. For globals instrumentation
> 	    we will need shadow backing modules addresses.
> 	    So we need some sort of a shadow memory allocator
> 	    (something like vmmemap_populate() function, except
> 	    that it should be available after boot).
> 
> 	    __vmalloc_node_range() suits that purpose, except that
> 	    it can't be used for allocating for shadow in vmalloc
> 	    area because shadow in vmalloc is already 'allocated'
> 	    to protect us from other vmalloc users. So we need
> 	    16TB of unused addresses. And we have big enough hole
> 	    between vmemmap and %esp fixup stacks. So I moved shadow
> 	    there.

I'm not sure which new addition caused it, but I'm getting tons of
false positives from platform drivers trying to access memory they
don't "own" - because they expect to find hardware there.

I suspect we'd need to mark that memory region somehow to prevent
accesses to it from triggering warnings?


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger.
  2015-01-22  0:22     ` Sasha Levin
  (?)
@ 2015-01-22  5:34     ` Andrey Ryabinin
  2015-01-22  5:53       ` Andrey Ryabinin
  -1 siblings, 1 reply; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-22  5:34 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Linus Torvalds, Catalin Marinas

[-- Attachment #1: Type: text/plain, Size: 1905 bytes --]

2015-01-22 3:22 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
> On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
>> Changes since v8:
>>       - Fixed unpoisoned redzones for not-allocated-yet object
>>           in newly allocated slab page. (from Dmitry C.)
>>
>>       - Some minor non-function cleanups in kasan internals.
>>
>>       - Added ack from Catalin
>>
>>       - Added stack instrumentation. With this we could detect
>>           out of bounds accesses in stack variables. (patch 12)
>>
>>       - Added globals instrumentation - catching out of bounds in
>>           global varibles. (patches 13-17)
>>
>>       - Shadow moved out from vmalloc into hole between vmemmap
>>           and %esp fixup stacks. For globals instrumentation
>>           we will need shadow backing modules addresses.
>>           So we need some sort of a shadow memory allocator
>>           (something like vmmemap_populate() function, except
>>           that it should be available after boot).
>>
>>           __vmalloc_node_range() suits that purpose, except that
>>           it can't be used for allocating for shadow in vmalloc
>>           area because shadow in vmalloc is already 'allocated'
>>           to protect us from other vmalloc users. So we need
>>           16TB of unused addresses. And we have big enough hole
>>           between vmemmap and %esp fixup stacks. So I moved shadow
>>           there.
>
> I'm not sure which new addition caused it, but I'm getting tons of
> false positives from platform drivers trying to access memory they
> don't "own" - because they expect to find hardware there.
>

To be sure, that this is really false positives, could you try with
patches in attachment?
That should fix some bugs in several platform drivers.
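
For reference, the failure mode behind such reports is table-walking code that
stops only at an all-zero sentinel entry. A rough sketch (simplified, not the
exact platform_match_id()/x86_match_cpu() code) of that kind of loop:

	#include <linux/mod_devicetable.h>
	#include <linux/string.h>

	/* Walks the id table until it hits an entry with an empty name.
	 * Without a terminating "{ }" element the loop reads past the end
	 * of the array, which KASan reports as an out-of-bounds access. */
	static const struct platform_device_id *
	match_id_sketch(const struct platform_device_id *id, const char *name)
	{
		while (id->name[0]) {
			if (!strcmp(name, id->name))
				return id;
			id++;
		}
		return NULL;
	}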

> I suspect we'd need to mark that memory region somehow to prevent
> accesses to it from triggering warnings?
>
>
> Thanks,
> Sasha
>

[-- Attachment #2: backlight-da9052_bl-terminate-da9052_wled_ids-array-with-empty-element.patch --]
[-- Type: text/x-patch, Size: 414 bytes --]

diff --git a/drivers/video/backlight/da9052_bl.c b/drivers/video/backlight/da9052_bl.c
index d4bd74bd..b1943e7 100644
--- a/drivers/video/backlight/da9052_bl.c
+++ b/drivers/video/backlight/da9052_bl.c
@@ -165,6 +165,7 @@ static struct platform_device_id da9052_wled_ids[] = {
 		.name		= "da9052-wled3",
 		.driver_data	= DA9052_TYPE_WLED3,
 	},
+	{ },
 };
 
 static struct platform_driver da9052_wled_driver = {

[-- Attachment #3: crypto-ccp-terminate-ccp_support-array-with-empty-element.patch --]
[-- Type: text/x-patch, Size: 360 bytes --]

diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index c6e6171..ca29c12 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -583,6 +583,7 @@ bool ccp_queues_suspended(struct ccp_device *ccp)
 #ifdef CONFIG_X86
 static const struct x86_cpu_id ccp_support[] = {
 	{ X86_VENDOR_AMD, 22, },
+	{ },
 };
 #endif
 

[-- Attachment #4: rtc-s5m-terminate-s5m_rtc_id-array-with-empty-element.patch --]
[-- Type: text/x-patch, Size: 419 bytes --]

diff --git a/drivers/rtc/rtc-s5m.c b/drivers/rtc/rtc-s5m.c
index b5e7c46..89ac1d5 100644
--- a/drivers/rtc/rtc-s5m.c
+++ b/drivers/rtc/rtc-s5m.c
@@ -832,6 +832,7 @@ static SIMPLE_DEV_PM_OPS(s5m_rtc_pm_ops, s5m_rtc_suspend, s5m_rtc_resume);
 static const struct platform_device_id s5m_rtc_id[] = {
 	{ "s5m-rtc",		S5M8767X },
 	{ "s2mps14-rtc",	S2MPS14X },
+	{ },
 };
 
 static struct platform_driver s5m_rtc_driver = {

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger.
  2015-01-22  5:34     ` Andrey Ryabinin
@ 2015-01-22  5:53       ` Andrey Ryabinin
  2015-01-22 21:46           ` Sasha Levin
  0 siblings, 1 reply; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-22  5:53 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Linus Torvalds, Catalin Marinas

[-- Attachment #1: Type: text/plain, Size: 1858 bytes --]

2015-01-22 8:34 GMT+03:00 Andrey Ryabinin <ryabinin.a.a@gmail.com>:
> 2015-01-22 3:22 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
>> On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
>>> Changes since v8:
>>>       - Fixed unpoisoned redzones for not-allocated-yet object
>>>           in newly allocated slab page. (from Dmitry C.)
>>>
>>>       - Some minor non-function cleanups in kasan internals.
>>>
>>>       - Added ack from Catalin
>>>
>>>       - Added stack instrumentation. With this we could detect
>>>           out of bounds accesses in stack variables. (patch 12)
>>>
>>>       - Added globals instrumentation - catching out of bounds in
>>>           global varibles. (patches 13-17)
>>>
>>>       - Shadow moved out from vmalloc into hole between vmemmap
>>>           and %esp fixup stacks. For globals instrumentation
>>>           we will need shadow backing modules addresses.
>>>           So we need some sort of a shadow memory allocator
>>>           (something like vmmemap_populate() function, except
>>>           that it should be available after boot).
>>>
>>>           __vmalloc_node_range() suits that purpose, except that
>>>           it can't be used for allocating for shadow in vmalloc
>>>           area because shadow in vmalloc is already 'allocated'
>>>           to protect us from other vmalloc users. So we need
>>>           16TB of unused addresses. And we have big enough hole
>>>           between vmemmap and %esp fixup stacks. So I moved shadow
>>>           there.
>>
>> I'm not sure which new addition caused it, but I'm getting tons of
>> false positives from platform drivers trying to access memory they
>> don't "own" - because they expect to find hardware there.
>>
>
> To be sure, that this is really false positives, could you try with
> patches in attachment?

Attaching properly formed patches

[-- Attachment #2: 0001-backlight-da9052_bl-terminate-da9052_wled_ids-array-.patch --]
[-- Type: text/x-patch, Size: 892 bytes --]

From 8aca28dc4df2ed597f4fe0d49468021db5f29c61 Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Date: Thu, 22 Jan 2015 12:44:42 +0300
Subject: [PATCH 1/3] backlight: da9052_bl: terminate da9052_wled_ids array
 with empty element

Array of platform_device_id elements should be terminated
with empty element.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 drivers/video/backlight/da9052_bl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/video/backlight/da9052_bl.c b/drivers/video/backlight/da9052_bl.c
index d4bd74bd..b1943e7 100644
--- a/drivers/video/backlight/da9052_bl.c
+++ b/drivers/video/backlight/da9052_bl.c
@@ -165,6 +165,7 @@ static struct platform_device_id da9052_wled_ids[] = {
 		.name		= "da9052-wled3",
 		.driver_data	= DA9052_TYPE_WLED3,
 	},
+	{ },
 };
 
 static struct platform_driver da9052_wled_driver = {
-- 
2.0.4


[-- Attachment #3: 0002-crypto-ccp-terminate-ccp_support-array-with-empty-el.patch --]
[-- Type: text/x-patch, Size: 816 bytes --]

From 27f8cf0aff7d16c061dda9dd219887cae2214586 Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Date: Thu, 22 Jan 2015 12:46:44 +0300
Subject: [PATCH 2/3] crypto: ccp: terminate ccp_support array with empty
 element

x86_match_cpu() expects array of x86_cpu_ids terminated
with empty element.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 drivers/crypto/ccp/ccp-dev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index c6e6171..ca29c12 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -583,6 +583,7 @@ bool ccp_queues_suspended(struct ccp_device *ccp)
 #ifdef CONFIG_X86
 static const struct x86_cpu_id ccp_support[] = {
 	{ X86_VENDOR_AMD, 22, },
+	{ },
 };
 #endif
 
-- 
2.0.4


[-- Attachment #4: 0003-rtc-s5m-terminate-s5m_rtc_id-array-with-empty-elemen.patch --]
[-- Type: text/x-patch, Size: 865 bytes --]

From 3a3bd9cfd223f14d31352b9a44209476b3f5ef11 Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Date: Thu, 22 Jan 2015 12:48:15 +0300
Subject: [PATCH 3/3] rtc: s5m: terminate s5m_rtc_id array with empty element

Array of platform_device_id elements should be terminated
with empty element.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 drivers/rtc/rtc-s5m.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/rtc/rtc-s5m.c b/drivers/rtc/rtc-s5m.c
index b5e7c46..89ac1d5 100644
--- a/drivers/rtc/rtc-s5m.c
+++ b/drivers/rtc/rtc-s5m.c
@@ -832,6 +832,7 @@ static SIMPLE_DEV_PM_OPS(s5m_rtc_pm_ops, s5m_rtc_suspend, s5m_rtc_resume);
 static const struct platform_device_id s5m_rtc_id[] = {
 	{ "s5m-rtc",		S5M8767X },
 	{ "s2mps14-rtc",	S2MPS14X },
+	{ },
 };
 
 static struct platform_driver s5m_rtc_driver = {
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger.
  2015-01-22  5:53       ` Andrey Ryabinin
@ 2015-01-22 21:46           ` Sasha Levin
  0 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2015-01-22 21:46 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Michal Marek, Thomas Gleixner,
	Ingo Molnar, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	Vegard Nossum, H. Peter Anvin, x86, linux-mm, Randy Dunlap,
	Peter Zijlstra, Alexander Viro, Dave Jones, Jonathan Corbet,
	Linus Torvalds, Catalin Marinas

On 01/22/2015 12:53 AM, Andrey Ryabinin wrote:
> 2015-01-22 8:34 GMT+03:00 Andrey Ryabinin <ryabinin.a.a@gmail.com>:
>> 2015-01-22 3:22 GMT+03:00 Sasha Levin <sasha.levin@oracle.com>:
>>> On 01/21/2015 11:51 AM, Andrey Ryabinin wrote:
>>>> Changes since v8:
>>>>       - Fixed unpoisoned redzones for not-allocated-yet object
>>>>           in newly allocated slab page. (from Dmitry C.)
>>>>
>>>>       - Some minor non-function cleanups in kasan internals.
>>>>
>>>>       - Added ack from Catalin
>>>>
>>>>       - Added stack instrumentation. With this we could detect
>>>>           out of bounds accesses in stack variables. (patch 12)
>>>>
>>>>       - Added globals instrumentation - catching out of bounds in
>>>>           global varibles. (patches 13-17)
>>>>
>>>>       - Shadow moved out from vmalloc into hole between vmemmap
>>>>           and %esp fixup stacks. For globals instrumentation
>>>>           we will need shadow backing modules addresses.
>>>>           So we need some sort of a shadow memory allocator
>>>>           (something like vmmemap_populate() function, except
>>>>           that it should be available after boot).
>>>>
>>>>           __vmalloc_node_range() suits that purpose, except that
>>>>           it can't be used for allocating for shadow in vmalloc
>>>>           area because shadow in vmalloc is already 'allocated'
>>>>           to protect us from other vmalloc users. So we need
>>>>           16TB of unused addresses. And we have big enough hole
>>>>           between vmemmap and %esp fixup stacks. So I moved shadow
>>>>           there.
>>>
>>> I'm not sure which new addition caused it, but I'm getting tons of
>>> false positives from platform drivers trying to access memory they
>>> don't "own" - because they expect to find hardware there.
>>>
>>
>> To be sure, that this is really false positives, could you try with
>> patches in attachment?
> 
> Attaching properly formed patches
> 

Yup, you're right - that did the trick.

Just to keep it going, here's a funny trace where kasan is catching issues
in ubsan: :)

[ 2652.320296] BUG: AddressSanitizer: out of bounds access in strnlen+0xa7/0xb0 at addr ffffffff97b5c9e4
[ 2652.320296] Read of size 1 by task trinity-c37/36198
[ 2652.320296] Address belongs to variable types__truncate+0xd884/0xde80
[ 2652.320296] CPU: 17 PID: 36198 Comm: trinity-c37 Not tainted 3.19.0-rc5-next-20150121-sasha-00064-g3c37e35-dirty #1809
[ 2652.320296]  0000000000000000 0000000000000000 ffff88011069f9f0 ffff88011069f938
[ 2652.320296]  ffffffff92e9e917 0000000000000039 0000000000000000 ffff88011069f9d8
[ 2652.320296]  ffffffff81b4a802 ffffffff843cd580 ffff880a70f24457 ffff00066c0a0100
[ 2652.320296] Call Trace:
[ 2652.320296]  [<ffffffff92e9e917>] dump_stack+0x4f/0x7b
[ 2652.320296]  [<ffffffff81b4a802>] kasan_report_error+0x642/0x9d0
[ 2652.320296]  [<ffffffff843cd580>] ? pointer.isra.16+0xe20/0xe20
[ 2652.320296]  [<ffffffff843bc882>] ? put_dec+0x72/0x90
[ 2652.320296]  [<ffffffff81b4abf1>] __asan_report_load1_noabort+0x61/0x80
[ 2652.320296]  [<ffffffff843b9a97>] ? strnlen+0xa7/0xb0
[ 2652.363888]  [<ffffffff843b9a97>] strnlen+0xa7/0xb0
[ 2652.363888]  [<ffffffff843c605f>] string.isra.0+0x3f/0x2f0
[ 2652.363888]  [<ffffffff843cd912>] vsnprintf+0x392/0x23b0
[ 2652.363888]  [<ffffffff843cd580>] ? pointer.isra.16+0xe20/0xe20
[ 2652.363888]  [<ffffffff81547101>] ? get_parent_ip+0x11/0x50
[ 2652.363888]  [<ffffffff843cf951>] vscnprintf+0x21/0x70
[ 2652.363888]  [<ffffffff81629ee0>] ? vprintk_emit+0xe0/0x960
[ 2652.363888]  [<ffffffff81629f14>] vprintk_emit+0x114/0x960
[ 2652.363888]  [<ffffffff843cf951>] ? vscnprintf+0x21/0x70
[ 2652.363888]  [<ffffffff8162aa1f>] vprintk_default+0x1f/0x30
[ 2652.363888]  [<ffffffff92e71c7c>] printk+0x97/0xb1
[ 2652.363888]  [<ffffffff92e71be5>] ? bitmap_weight+0xb/0xb
[ 2652.363888]  [<ffffffff92ea10f5>] ? val_to_string.constprop.3+0x191/0x1e4
[ 2652.363888]  [<ffffffff92ea1c4c>] __ubsan_handle_negate_overflow+0x13e/0x184
[ 2652.363888]  [<ffffffff92ea1b0e>] ? __ubsan_handle_divrem_overflow+0x284/0x284
[ 2652.363888]  [<ffffffff81612c20>] ? do_raw_spin_trylock+0x200/0x200
[ 2652.363888]  [<ffffffff81bba468>] rw_verify_area+0x318/0x440
[ 2652.363888]  [<ffffffff81bbe816>] vfs_read+0x106/0x490
[ 2652.363888]  [<ffffffff81c4db19>] ? __fget_light+0x249/0x370
[ 2652.363888]  [<ffffffff81bbecb2>] SyS_read+0x112/0x280
[ 2652.363888]  [<ffffffff81bbeba0>] ? vfs_read+0x490/0x490
[ 2652.363888]  [<ffffffff815fb1f9>] ? trace_hardirqs_on_caller+0x519/0x850
[ 2652.363888]  [<ffffffff92f64b42>] tracesys_phase2+0xdc/0xe1
[ 2652.363888] Memory state around the buggy address:
[ 2652.363888]  ffffffff97b5c880: fa fa fa fa 04 fa fa fa fa fa fa fa 00 00 00 00
[ 2652.363888]  ffffffff97b5c900: 00 00 00 00 00 fa fa fa fa fa fa fa 00 00 00 fa
[ 2652.363888] >ffffffff97b5c980: fa fa fa fa 00 00 00 fa fa fa fa fa 04 fa fa fa
[ 2652.363888]                                                        ^
[ 2652.363888]  ffffffff97b5ca00: fa fa fa fa 00 00 00 00 00 00 00 00 00 fa fa fa
[ 2652.363888]  ffffffff97b5ca80: fa fa fa fa 00 00 00 00 00 fa fa fa fa fa fa fa


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger.
  2015-01-22 21:46           ` Sasha Levin
  (?)
@ 2015-01-23  9:50           ` y.gribov
  -1 siblings, 0 replies; 862+ messages in thread
From: y.gribov @ 2015-01-23  9:50 UTC (permalink / raw)
  To: linux-kernel

> Just to keep it going, here's a funny trace where kasan is catching issues
> in ubsan: :)

Thanks, I've filed an upstream PR for this
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64741




^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger.
  2015-01-22 21:46           ` Sasha Levin
@ 2015-01-23 10:14             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-23 10:14 UTC (permalink / raw)
  To: Sasha Levin, Andrey Ryabinin
  Cc: LKML, Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Michal Marek, Thomas Gleixner, Ingo Molnar, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, Vegard Nossum, H. Peter Anvin, x86,
	linux-mm, Randy Dunlap, Peter Zijlstra, Alexander Viro,
	Dave Jones, Jonathan Corbet, Linus Torvalds, Catalin Marinas

On 01/23/2015 12:46 AM, Sasha Levin wrote:
> Just to keep it going, here's a funny trace where kasan is catching issues
> in ubsan: :)
> 

Thanks, it turns out to be a GCC bug:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64741

As a workaround you could put kasan_disable_local()/kasan_enable_local()
into ubsan_prologue()/ubsan_epilogue().
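
A sketch of that workaround (assuming the ubsan_prologue()/ubsan_epilogue()
helpers and struct source_location from the UBSan patch set; their real
bodies do more than shown here):

	#include <linux/kasan.h>

	struct source_location;		/* declared in the UBSan patches */

	/* Bracket UBSan report printing with kasan_disable_local()/
	 * kasan_enable_local() so KASan ignores the accesses UBSan itself
	 * makes while generating a report. */
	static void ubsan_prologue(struct source_location *loc)
	{
		kasan_disable_local();
		/* ... existing report header printing ... */
	}

	static void ubsan_epilogue(void)
	{
		/* ... existing report footer printing ... */
		kasan_enable_local();
	}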


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
  2015-01-21 16:51     ` Andrey Ryabinin
  (?)
  (?)
@ 2015-01-23 12:20     ` Michal Marek
  -1 siblings, 0 replies; 862+ messages in thread
From: Michal Marek @ 2015-01-23 12:20 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Jonathan Corbet,
	Ingo Molnar, Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

On 2015-01-21 17:51, Andrey Ryabinin wrote:
> +ifeq ($(CONFIG_KASAN),y)
> +_c_flags += $(if $(patsubst n%,, \
> +		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN)), \

You can replace $(CONFIG_KASAN) with 'y' in the concatenation, because
we already know that it is set to 'y'.

Michal

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
  2015-01-21 16:51     ` Andrey Ryabinin
                       ` (2 preceding siblings ...)
  (?)
@ 2015-01-23 12:35     ` Michal Marek
  2015-01-23 12:48         ` Andrey Ryabinin
  -1 siblings, 1 reply; 862+ messages in thread
From: Michal Marek @ 2015-01-23 12:35 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Jonathan Corbet,
	Ingo Molnar, Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

On 2015-01-21 17:51, Andrey Ryabinin wrote:
> +ifdef CONFIG_KASAN_INLINE
> +	call_threshold := 10000
> +else
> +	call_threshold := 0
> +endif

Can you please move this to a Kconfig helper like you did with
CONFIG_KASAN_SHADOW_OFFSET? Despite occasional efforts to reduce the
size of the main Makefile, it has been growing over time. With this
patch set, we are approaching 2.6.28's record of 1669 lines.

Michal

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
  2015-01-23 12:35     ` Michal Marek
@ 2015-01-23 12:48         ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-23 12:48 UTC (permalink / raw)
  To: Michal Marek, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Jonathan Corbet,
	Ingo Molnar, Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

On 01/23/2015 03:35 PM, Michal Marek wrote:
> On 2015-01-21 17:51, Andrey Ryabinin wrote:
>> +ifdef CONFIG_KASAN_INLINE
>> +	call_threshold := 10000
>> +else
>> +	call_threshold := 0
>> +endif
> 
> Can you please move this to a Kconfig helper like you did with
> CONFIG_KASAN_SHADOW_OFFSET? Despite occasional efforts to reduce the
> size of the main Makefile, it has been growing over time. With this
> patch set, we are approaching 2.6.28's record of 1669 lines.
> 

How about moving the whole kasan stuff into scripts/Makefile.kasan
and just include it in generic Makefile?

> Michal
> 


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v9 01/17] Add kernel address sanitizer infrastructure.
  2015-01-23 12:48         ` Andrey Ryabinin
  (?)
@ 2015-01-23 12:51         ` Michal Marek
  -1 siblings, 0 replies; 862+ messages in thread
From: Michal Marek @ 2015-01-23 12:51 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Jonathan Corbet,
	Ingo Molnar, Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

On 2015-01-23 13:48, Andrey Ryabinin wrote:
> On 01/23/2015 03:35 PM, Michal Marek wrote:
>> On 2015-01-21 17:51, Andrey Ryabinin wrote:
>>> +ifdef CONFIG_KASAN_INLINE
>>> +	call_threshold := 10000
>>> +else
>>> +	call_threshold := 0
>>> +endif
>>
>> Can you please move this to a Kconfig helper like you did with
>> CONFIG_KASAN_SHADOW_OFFSET? Despite occasional efforts to reduce the
>> size of the main Makefile, it has been growing over time. With this
>> patch set, we are approaching 2.6.28's record of 1669 lines.
>>
> 
> How about moving the whole kasan stuff into scripts/Makefile.kasan
> and just include it in generic Makefile?

That would be even better!

Thanks,
Michal

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v10 00/17] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2015-01-29 15:11   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Linus Torvalds, Catalin Marinas

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the
kernel to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v4.9.2
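
A minimal, purely illustrative example of the class of bug KASan is designed to
catch (the function below is hypothetical, not part of this patch set):

	#include <linux/slab.h>

	static noinline void kmalloc_oob_example(void)
	{
		char *ptr = kmalloc(8, GFP_KERNEL);

		if (!ptr)
			return;

		/* One byte past the 8-byte object: KASan reports this write
		 * as an out-of-bounds access. */
		ptr[8] = 'x';
		kfree(ptr);
	}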


As usual, patches are available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v10


Changes since v9:
 	- Makefile changes per discussion with Michal Marek
	- Fix false positive 'wild memory access' reports that
	  could sometimes happen when freeing module memory.


Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in out internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be 500-600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work on several CPUs - it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches a
	  bug right before it happens, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and to use the compiler's instrumentation to check the shadow
    memory on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
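
    For illustration, a simplified sketch of what such a check boils down to for an
    aligned 8-byte access (the helper below is hypothetical; the real __asan_load8()/
    __asan_store8() also handle unaligned and partial accesses before reporting):

         /* One shadow byte covers 8 bytes of memory; for an aligned 8-byte
          * access the whole region is valid only if that shadow byte is 0. */
         static __always_inline bool address_is_poisoned_8(unsigned long addr)
         {
                    s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

                    return unlikely(shadow_value != 0);
         }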


Changelog for previous versions:
===============================

Changes since v8:
	- Fixed unpoisoned redzones for not-allocated-yet object
	    in newly allocated slab page. (from Dmitry C.)

	- Some minor non-function cleanups in kasan internals.

	- Added ack from Catalin

	- Added stack instrumentation. With this we could detect
	    out of bounds accesses in stack variables. (patch 12)

	- Added globals instrumentation - catching out of bounds in
	    global variables. (patches 13-17)

	- Shadow moved out from vmalloc into hole between vmemmap
	    and %esp fixup stacks. For globals instrumentation
	    we will need shadow backing modules addresses.
	    So we need some sort of a shadow memory allocator
	    (something like the vmemmap_populate() function, except
	    that it should be available after boot).

	    __vmalloc_node_range() suits that purpose, except that
	    it can't be used for allocating for shadow in vmalloc
	    area because shadow in vmalloc is already 'allocated'
	    to protect us from other vmalloc users. So we need
	    16TB of unused addresses. And we have big enough hole
	    between vmemmap and %esp fixup stacks. So I moved shadow
	    there.


Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y from Sasha.

        - Don't poison the redzone on freeing, since it is already poisoned (from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.


Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently, instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before a general protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split,
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives an
      unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merged patch 'efi: libstub: disable KASAN for efistub' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped last patch with inline instrumentation support. At first let's wait for merging GCC patches.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in last patch. This will require two not
         yet-in-trunk-patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (17):
  Add kernel address sanitizer infrastructure.
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share object_err function
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  kasan: enable stack instrumentation
  mm: vmalloc: add flag preventing guard hole allocation
  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  kernel: add support for .init_array.* constructors
  module: fix types of device tables aliases
  kasan: enable instrumentation of global variables

 Documentation/kasan.txt                | 169 ++++++++++++
 Documentation/x86/x86_64/mm.txt        |   2 +
 Makefile                               |   3 +-
 arch/arm/kernel/module.c               |   2 +-
 arch/arm64/kernel/module.c             |   4 +-
 arch/mips/kernel/module.c              |   2 +-
 arch/parisc/kernel/module.c            |   2 +-
 arch/s390/kernel/module.c              |   2 +-
 arch/sparc/kernel/module.c             |   2 +-
 arch/unicore32/kernel/module.c         |   2 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  31 +++
 arch/x86/include/asm/page_64_types.h   |  12 +-
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   4 +
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  30 ++
 arch/x86/kernel/module.c               |  14 +-
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 205 ++++++++++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   5 +
 include/asm-generic/vmlinux.lds.h      |   1 +
 include/linux/compiler-gcc4.h          |   4 +
 include/linux/compiler-gcc5.h          |   2 +
 include/linux/init_task.h              |   8 +
 include/linux/kasan.h                  |  86 ++++++
 include/linux/module.h                 |   2 +-
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |   8 +
 include/linux/vmalloc.h                |  13 +-
 kernel/module.c                        |   2 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  55 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 277 +++++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 487 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  86 ++++++
 mm/kasan/report.c                      | 251 +++++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  52 +++-
 mm/vmalloc.c                           |  16 +-
 scripts/Makefile.kasan                 |  26 ++
 scripts/Makefile.lib                   |  10 +
 scripts/module-common.lds              |   3 +
 65 files changed, 1964 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

--
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
--
2.2.2


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v10 00/17] Kernel address sanitizer - runtime memory debugger.
@ 2015-01-29 15:11   ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Linus Torvalds, Catalin Marinas

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation for checking every memory access, therefore you
will need a fresh GCC >= v4.9.2.


As usual, patches are available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v10


Changes since v9:
 	- Makefile changes per discussion with Michal Marek
	- Fix false positive 'wild memory access' reports that
	  could sometimes happen when freeing module memory.


Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work on several CPUs. It always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and use compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for the shadow to cover all 128TB)
    and uses direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
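
    As a concrete illustration (values are examples; 0xfc is the redzone marker
    visible in the shadow dumps of the example report in the documentation patch
    below), a hypothetical 13-byte kmalloc() object is covered by two shadow
    bytes followed by redzone markers:

         object bytes:   [0 .. 7]    [8 .. 12]    redzone ...
         shadow bytes:      00          05        fc  fc  ...

    The first 8-byte granule is fully accessible (shadow 0), only the first 5
    bytes of the second granule are accessible (shadow 5), and the redzone
    granules carry a negative marker.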

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.


Changelog for previous versions:
===============================

Changes since v8:
	- Fixed unpoisoned redzones for not-yet-allocated objects
	    in a newly allocated slab page. (from Dmitry C.)

	- Some minor non-function cleanups in kasan internals.

	- Added ack from Catalin

	- Added stack instrumentation. With this we could detect
	    out of bounds accesses in stack variables. (patch 12)

	- Added globals instrumentation - catching out of bounds in
	    global variables. (patches 13-17)

	- Shadow moved out from vmalloc into hole between vmemmap
	    and %esp fixup stacks. For globals instrumentation
	    we will need shadow backing modules addresses.
	    So we need some sort of a shadow memory allocator
	    (something like the vmemmap_populate() function, except
	    that it should be available after boot).

	    __vmalloc_node_range() suits that purpose, except that
	    it can't be used to allocate shadow in the vmalloc
	    area, because the shadow in vmalloc is already 'allocated'
	    to protect us from other vmalloc users. So we need
	    16TB of unused addresses, and we have a big enough hole
	    between vmemmap and the %esp fixup stacks. So I moved the shadow
	    there.


Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y from Sasha.

        - Don't poison the redzone on freeing, since it is already poisoned (from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.


Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently instrumentation of builtin function calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before a general protection fault,
       explaining that the GPF could be caused by a NULL-ptr dereference or a user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split,
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives an
      unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merged patch 'efi: libstub: disable KASAN for efistub' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped last patch with inline instrumentation support. At first let's wait for merging GCC patches.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in last patch. This will require two not
         yet-in-trunk-patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available for the kernel:
      16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for buddy allocator moved to right places


Andrey Ryabinin (17):
  Add kernel address sanitizer infrastructure.
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share object_err function
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  kasan: enable stack instrumentation
  mm: vmalloc: add flag preventing guard hole allocation
  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  kernel: add support for .init_array.* constructors
  module: fix types of device tables aliases
  kasan: enable instrumentation of global variables

 Documentation/kasan.txt                | 169 ++++++++++++
 Documentation/x86/x86_64/mm.txt        |   2 +
 Makefile                               |   3 +-
 arch/arm/kernel/module.c               |   2 +-
 arch/arm64/kernel/module.c             |   4 +-
 arch/mips/kernel/module.c              |   2 +-
 arch/parisc/kernel/module.c            |   2 +-
 arch/s390/kernel/module.c              |   2 +-
 arch/sparc/kernel/module.c             |   2 +-
 arch/unicore32/kernel/module.c         |   2 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  31 +++
 arch/x86/include/asm/page_64_types.h   |  12 +-
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   4 +
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  30 ++
 arch/x86/kernel/module.c               |  14 +-
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 205 ++++++++++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   5 +
 include/asm-generic/vmlinux.lds.h      |   1 +
 include/linux/compiler-gcc4.h          |   4 +
 include/linux/compiler-gcc5.h          |   2 +
 include/linux/init_task.h              |   8 +
 include/linux/kasan.h                  |  86 ++++++
 include/linux/module.h                 |   2 +-
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |   8 +
 include/linux/vmalloc.h                |  13 +-
 kernel/module.c                        |   2 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  55 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 277 +++++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 487 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  86 ++++++
 mm/kasan/report.c                      | 251 +++++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  52 +++-
 mm/vmalloc.c                           |  16 +-
 scripts/Makefile.kasan                 |  26 ++
 scripts/Makefile.lib                   |  10 +
 scripts/module-common.lds              |   3 +
 65 files changed, 1964 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

--
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
--
2.2.2


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
  2015-01-29 15:11   ` Andrey Ryabinin
  (?)
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 is required.

This patch only adds the infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some of the code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
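
Conceptually, outline instrumentation of a 4-byte store looks like the following
(a simplified illustration, not literal compiler output; set_flag() is a made-up
example function, while __asan_store4() is the interface added by this patch):

     void set_flag(int *p)
     {
             /* inserted by the compiler; reports an error if *p is poisoned */
             __asan_store4((unsigned long)p);
             *p = 1;         /* the original access */
     }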

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work on several CPUs. It always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both reads and writes.

	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  43 +++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 190 ++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 839 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or newer.
+
+Currently KASan is supported only for the x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN, configure the kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the compiler instrumentation types. The former produces a smaller
+binary while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory.
+This option significantly enlarges the kernel but gives a x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..bb72642
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,43 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables the address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a function call
+	  (__asan_load*/__asan_store*). These functions perform a check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about a x2 boost over outline instrumentation), but
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..895fa5f
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..21a9eeb
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,190 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
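
As a concrete illustration, here is a minimal userspace sketch of the mapping and the
shadow encoding described above. It assumes a 64-bit build; the object address and the
shadow offset below are made-up values standing in for the real kernel ones:

	#include <stdio.h>

	#define KASAN_SHADOW_SCALE_SHIFT 3
	/* Hypothetical value, standing in for CONFIG_KASAN_SHADOW_OFFSET. */
	#define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL

	static unsigned long kasan_mem_to_shadow(unsigned long addr)
	{
		return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
	}

	int main(void)
	{
		/* Made-up, 8-byte aligned object address. */
		unsigned long obj = 0xffff880000001000UL;

		/*
		 * For a 13-byte object the shadow would hold:
		 *   byte at kasan_mem_to_shadow(obj)     = 0x00 (bytes 0..7 accessible)
		 *   byte at kasan_mem_to_shadow(obj + 8) = 0x05 (only bytes 8..12 accessible)
		 */
		printf("shadow byte for obj+0 lives at %#lx\n", kasan_mem_to_shadow(obj));
		printf("shadow byte for obj+8 lives at %#lx\n", kasan_mem_to_shadow(obj + 8));
		return 0;
	}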

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is reported; a rough
sketch of this scheme follows.
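
For illustration only (this is not actual GCC output), here is a tiny userspace sketch of
what outline instrumentation amounts to for a 4-byte store; the __asan_store4() stub below
just prints instead of consulting shadow memory:

	#include <stdio.h>

	/* Stub standing in for the kernel's real check routine. */
	static void __asan_store4(unsigned long addr)
	{
		printf("check 4-byte write at %#lx\n", addr);
	}

	static void set_flag(int *p)
	{
		/* The compiler-inserted check runs before the actual access. */
		__asan_store4((unsigned long)p);
		*p = 1;
	}

	int main(void)
	{
		int flag;

		set_flag(&flag);
		return 0;
	}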

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be 500-600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck can't work with several CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads,
	  while KASan is able to detect both bad reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches
	  a bug right before it happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  43 +++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 190 ++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 839 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are the two compiler instrumentation types. The former produces a smaller
+binary, while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed, or part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and the other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+=========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+access is valid by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives a 1.1x - 2x performance boost
+over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..bb72642
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,43 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables the address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform a check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about a 2x boost over outline instrumentation), but it
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..895fa5f
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..21a9eeb
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,190 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= v4.9.2 required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is function to translate address to corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and other (8 - k) bytes are not;
Any negative value indicates that the entire 8-bytes are inaccessible.
Different negative values used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler.
Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether memory region is valid to access or not by checking
corresponding shadow memory. If access is not valid an error printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in out internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of unitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also kmemcheck couldn't work on several CPUs. It always sets number of CPUs to 1.
	  KASan doesn't have such limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity level, so it able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases are not able to detect bad reads,
	  KASan able to detect both reads and writes.

	- In some cases (e.g. redzone overwritten) SLUB_DEBUG detect
	  bugs only on allocation/freeing of object. KASan catch
	  bugs right before it will happen, so we always know exact
	  place of first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt               | 169 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  43 +++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 296 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  47 ++++++
 mm/kasan/report.c                     | 190 ++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 839 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..a3a9009
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller
+binary while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed, or part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrow points to the shadow byte 03, which means that
+only the first 3 bytes of that 8-byte memory region are accessible, so the
+reported 1-byte write landed just past the end of the object.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+memory access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives an x1.1-x2 performance boost
+over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..bb72642
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,43 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+
+static inline void kasan_enable_local(void)
+{
+	current->kasan_depth++;
+}
+
+static inline void kasan_disable_local(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..10341df
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: runtime memory debugger"
+	help
+	  Enables address sanitizer - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions check the shadow
+	  memory. This is slower than inline instrumentation, however
+	  it doesn't bloat the size of the kernel's .text section as
+	  much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about an x2 boost over outline instrumentation), but
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..895fa5f
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,296 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow(addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(unsigned long start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*(u8 *)start))
+			return start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(unsigned long start,
+						unsigned long end)
+{
+	unsigned int prefix = start % 8;
+	unsigned int words;
+	unsigned long ret;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow(addr),
+			kasan_mem_to_shadow(addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow(last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely(addr < kasan_shadow_to_mem(KASAN_SHADOW_START))) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write);
+}
+
+#define DECLARE_ASAN_CHECK(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__attribute__((alias("__asan_load"#size)))		\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__attribute__((alias("__asan_store"#size)))		\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DECLARE_ASAN_CHECK(1);
+DECLARE_ASAN_CHECK(2);
+DECLARE_ASAN_CHECK(4);
+DECLARE_ASAN_CHECK(8);
+DECLARE_ASAN_CHECK(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__attribute__((alias("__asan_loadN")))
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__attribute__((alias("__asan_storeN")))
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..da0e53c
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,47 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct access_info {
+	unsigned long access_addr;
+	unsigned long first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+static __always_inline void kasan_report(unsigned long addr,
+					size_t size,
+					bool is_write)
+{
+	struct access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..21a9eeb
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,190 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static unsigned long find_first_bad_addr(unsigned long addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	unsigned long first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		kasan_disable_local();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_local();
+
+		if (row_is_guilty(aligned_shadow, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(aligned_shadow, shadow),
+				'^');
+
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false);                  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true);                    \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 02/17] x86_64: add KASan support
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Jonathan Corbet, Andy Lutomirski, open list:DOCUMENTATION

This patch adds arch specific code for kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [ffffec0000000000 - fffffc0000000000],
between vmemmap and the %esp fixup stacks.
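
For reference, this range follows directly from the shadow mapping defined in
arch/x86/include/asm/kasan.h and the x86_64 shadow offset set in
lib/Kconfig.kasan (0xdffffc0000000000):

	KASAN_SHADOW_START = 0xdffffc0000000000 + (0xffff800000000000 >> 3)
	                   = 0xffffec0000000000
	KASAN_SHADOW_END   = KASAN_SHADOW_START + (1 << (47 - 3))
	                   = 0xfffffc0000000000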

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/x86/x86_64/mm.txt   |   2 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  31 ++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +-
 arch/x86/kernel/head_64.S         |  30 ++++++
 arch/x86/kernel/setup.c           |   3 +
 arch/x86/mm/Makefile              |   3 +
 arch/x86/mm/kasan_init_64.c       | 197 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 16 files changed, 289 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 052ee64..05712ac 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d34ef08..e5c87b2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -85,6 +85,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 3db07f3..57bbf2f 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index ad754b4..843feb3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..8b22422
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,31 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
+					(0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+extern pte_t kasan_zero_pte[];
+extern pte_t kasan_zero_pmd[];
+extern pte_t kasan_zero_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 316b34e..4fc8ca7 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..efcddfa 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_early_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_early_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..6fd514d9 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,38 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(kasan_zero_pte)
+	FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pmd)
+	FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pud)
+	FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+
+#ifdef CONFIG_KASAN
+/*
+ * This page is used as the early shadow. We don't use empty_zero_page
+ * at early stages because stack instrumentation could write some garbage
+ * to this page.
+ * Later we reuse it as the zero shadow for large ranges of memory
+ * that are allowed to be accessed but are not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+NEXT_PAGE(kasan_zero_page)
+	.skip PAGE_SIZE
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c4648ada..27d2009 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1174,6 +1175,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..cfb932e
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,197 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	while (addr + PAGE_SIZE <= end) {
+		WARN_ON(!pte_none(*pte));
+		set_pte(pte, __pte(__pa_nodebug(kasan_zero_page)
+					| __PAGE_KERNEL_RO));
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+	return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+		WARN_ON(!pmd_none(*pmd));
+		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+					| __PAGE_KERNEL_RO));
+		addr += PMD_SIZE;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (addr < end) {
+		if (pmd_none(*pmd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pte_populate(pmd, addr, end);
+	}
+	return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pud_t *pud = pud_offset(pgd, addr);
+
+	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+		WARN_ON(!pud_none(*pud));
+		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+					| __PAGE_KERNEL_RO));
+		addr += PUD_SIZE;
+		pud = pud_offset(pgd, addr);
+	}
+
+	if (addr < end) {
+		if (pud_none(*pud)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pmd_populate(pud, addr, end);
+	}
+	return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+	int ret = 0;
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+		WARN_ON(!pgd_none(*pgd));
+		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+					| __PAGE_KERNEL_RO));
+		addr += PGDIR_SIZE;
+		pgd = pgd_offset_k(addr);
+	}
+
+	if (addr < end) {
+		if (pgd_none(*pgd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pud_populate(pgd, addr, end);
+	}
+	return ret;
+}
+
+
+static void __init populate_zero_shadow(unsigned long start, unsigned long end)
+{
+	if (zero_pgd_populate(start, end))
+		panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	populate_zero_shadow(KASAN_SHADOW_START,
+			kasan_mem_to_shadow(PAGE_OFFSET));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			KASAN_SHADOW_END);
+
+	memset(kasan_zero_page, 0, PAGE_SIZE);
+
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..f86070d 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdffffc0000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 02/17] x86_64: add KASan support
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Jonathan Corbet, Andy Lutomirski, open list:DOCUMENTATION

This patch adds arch specific code for kernel address sanitizer.

16TB of virtual address space is used for shadow memory.
It's located in the range [ffffec0000000000 - fffffc0000000000],
between vmemmap and the %esp fixup stacks.

At an early stage we map the whole shadow region with the zero page.
Later, after pages are mapped into the direct mapping address range,
we unmap zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized.
With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the
shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/x86/x86_64/mm.txt   |   2 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  31 ++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +-
 arch/x86/kernel/head_64.S         |  30 ++++++
 arch/x86/kernel/setup.c           |   3 +
 arch/x86/mm/Makefile              |   3 +
 arch/x86/mm/kasan_init_64.c       | 197 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   2 +
 16 files changed, 289 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 052ee64..05712ac 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d34ef08..e5c87b2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -85,6 +85,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 3db07f3..57bbf2f 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index ad754b4..843feb3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..8b22422
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,31 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
+					(0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+extern pte_t kasan_zero_pte[];
+extern pte_t kasan_zero_pmd[];
+extern pte_t kasan_zero_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 316b34e..4fc8ca7 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..efcddfa 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_early_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_early_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..6fd514d9 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,38 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(kasan_zero_pte)
+	FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pmd)
+	FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pud)
+	FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+
+#ifdef CONFIG_KASAN
+/*
+ * This page is used as the early shadow. We don't use empty_zero_page
+ * at early stages because stack instrumentation could write some garbage
+ * to this page.
+ * Later we reuse it as the zero shadow for large ranges of memory
+ * that are allowed to be accessed but are not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+NEXT_PAGE(kasan_zero_page)
+	.skip PAGE_SIZE
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c4648ada..27d2009 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1174,6 +1175,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..cfb932e
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,197 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->start));
+	unsigned long end = kasan_mem_to_shadow(
+		(unsigned long)pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up the fast path. In some rare cases we could cross
+	 * the boundary of the mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	while (addr + PAGE_SIZE <= end) {
+		WARN_ON(!pte_none(*pte));
+		set_pte(pte, __pte(__pa_nodebug(kasan_zero_page)
+					| __PAGE_KERNEL_RO));
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+	return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+		WARN_ON(!pmd_none(*pmd));
+		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+					| __PAGE_KERNEL_RO));
+		addr += PMD_SIZE;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (addr < end) {
+		if (pmd_none(*pmd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pte_populate(pmd, addr, end);
+	}
+	return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pud_t *pud = pud_offset(pgd, addr);
+
+	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+		WARN_ON(!pud_none(*pud));
+		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+					| __PAGE_KERNEL_RO));
+		addr += PUD_SIZE;
+		pud = pud_offset(pgd, addr);
+	}
+
+	if (addr < end) {
+		if (pud_none(*pud)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pmd_populate(pud, addr, end);
+	}
+	return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+	int ret = 0;
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+		WARN_ON(!pgd_none(*pgd));
+		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+					| __PAGE_KERNEL_RO));
+		addr += PGDIR_SIZE;
+		pgd = pgd_offset_k(addr);
+	}
+
+	if (addr < end) {
+		if (pgd_none(*pgd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pud_populate(pgd, addr, end);
+	}
+	return ret;
+}
+
+
+static void __init populate_zero_shadow(unsigned long start, unsigned long end)
+{
+	if (zero_pgd_populate(start, end))
+		panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled\n");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	populate_zero_shadow(KASAN_SHADOW_START,
+			kasan_mem_to_shadow(PAGE_OFFSET));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			KASAN_SHADOW_END);
+
+	memset(kasan_zero_page, 0, PAGE_SIZE);
+
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 10341df..f86070d 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
+	depends on !MEMORY_HOTPLUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
@@ -15,6 +16,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdffffc0000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 03/17] mm: page_alloc: add kasan hooks on alloc and free paths
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

Add kernel address sanitizer hooks to mark the addresses of allocated
pages as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
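
For intuition, here is a tiny userspace model of what the two hooks do to
the shadow bytes (a sketch only: PAGE_SIZE, SHADOW_SCALE and the model_*
names are made up for the example, the real hooks are in the diff below):

	#include <stdio.h>
	#include <string.h>

	#define PAGE_SIZE	4096
	#define SHADOW_SCALE	8	/* one shadow byte per 8 bytes of memory */
	#define FREE_PAGE	0xFF	/* stands in for KASAN_FREE_PAGE */

	/* shadow covering an order-1 (two page) allocation */
	static unsigned char shadow[2 * PAGE_SIZE / SHADOW_SCALE];

	/* rough model of kasan_alloc_pages(): the whole range becomes accessible */
	static void model_alloc_pages(unsigned int order)
	{
		memset(shadow, 0, (PAGE_SIZE << order) / SHADOW_SCALE);
	}

	/* rough model of kasan_free_pages(): the whole range becomes poisoned */
	static void model_free_pages(unsigned int order)
	{
		memset(shadow, FREE_PAGE, (PAGE_SIZE << order) / SHADOW_SCALE);
	}

	int main(void)
	{
		model_alloc_pages(1);
		printf("after alloc: shadow[0] = 0x%02x\n", shadow[0]);	/* 0x00 */
		model_free_pages(1);
		printf("after free:  shadow[0] = 0x%02x\n", shadow[0]);	/* 0xff */
		return 0;
	}

Any later access to the freed page then finds KASAN_FREE_PAGE in the
shadow and is reported as a use-after-free (see the KASAN_FREE_PAGE case
added to mm/kasan/report.c below).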

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  2 ++
 mm/kasan/report.c     | 11 +++++++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 38 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index bb72642..ab5131e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -31,6 +31,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -38,6 +41,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index b68736c..b2d3ef9 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -72,6 +73,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 895fa5f..ea86458 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index da0e53c..0f09fb2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+
 struct access_info {
 	unsigned long access_addr;
 	unsigned long first_bad_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 21a9eeb..4e26c68 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -54,6 +54,9 @@ static void print_error_description(struct access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -69,6 +72,14 @@ static void print_error_description(struct access_info *info)
 
 static void print_address_description(struct access_info *info)
 {
+	unsigned long addr = info->access_addr;
+
+	if ((addr >= PAGE_OFFSET) &&
+		(addr < (unsigned long)high_memory)) {
+		struct page *page = virt_to_head_page((void *)addr);
+		dump_page(page, "kasan: bad access detected");
+	}
+
 	dump_stack();
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..31bc2e8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kmemcheck.h>
+#include <linux/kasan.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -787,6 +788,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -970,6 +972,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 04/17] mm: slub: introduce virt_to_obj function.
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object,
and returns the address of the beginning of that object.
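
A small userspace sketch of the arithmetic (the page, the 128-byte object
size and obj_start() are made up for illustration; obj_start() mirrors the
expression added below):

	#include <stdio.h>
	#include <stddef.h>

	/* same expression as virt_to_obj(), with a plain size instead of s->size */
	static void *obj_start(void *slab_page, void *x, size_t size)
	{
		return (char *)x - (((char *)x - (char *)slab_page) % size);
	}

	int main(void)
	{
		char page[4096];	/* stands in for the slab page address */
		void *x = page + 300;	/* points into the third 128-byte object */

		/* 300 % 128 == 44, so the object begins 44 bytes before x */
		printf("object offset: %td\n",
		       (char *)obj_start(page, x, 128) - page);	/* prints 256 */
		return 0;
	}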

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 9abf04e..eca3883 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
+{
+	return x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 05/17] mm: slub: share object_err function
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Remove the static qualifier from object_err() and add its declaration
to linux/slub_def.h so that it can be used by the kernel
address sanitizer.
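
For reference, the kasan report path added later in this series
(patch 07) ends up calling it roughly like this (condensed from that
patch's mm/kasan/report.c hunk):

	if (PageSlab(page)) {
		struct kmem_cache *cache = page->slab_cache;
		void *object = virt_to_obj(cache, page_address(page),
					   (void *)info->access_addr);

		object_err(cache, page, object, "kasan: bad access detected");
	}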

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 3 +++
 mm/slub.c                | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index eca3883..93e4104 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,4 +115,7 @@ static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
 	return x - ((x - slab_page) % s->size);
 }
 
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 1562955..3eb73f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,7 +629,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Wrap accesses to object metadata in external functions with
metadata_access_enable()/metadata_access_disable() calls.

These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
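
For example, a checker can use these hooks to suppress its own checks
while slub legitimately reads poisoned object metadata; that is exactly
how the kasan patch later in this series (patch 07) fills them in:

	static inline void metadata_access_enable(void)
	{
		kasan_disable_local();	/* slub is about to touch object metadata */
	}

	static inline void metadata_access_disable(void)
	{
		kasan_enable_local();
	}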

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3eb73f5..38d34a8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -467,13 +467,23 @@ static int slub_debug;
 static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 /*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +513,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +689,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +784,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 07/17] mm: slub: add kernel address sanitizer support for slub allocator
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Chernenkov, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as redzone.
Later, when a slub object is allocated, the number of bytes requested
by the caller is marked as accessible, and the rest of the object
(including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize was called for it.
There are some places in the kernel where the ksize function is called
to inquire the size of the really allocated area. Such callers may
validly access the whole allocated memory, so it should be marked as accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation of these files is disabled.
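
For intuition, a tiny userspace model of the shadow state kasan_kmalloc()
leaves behind for a kmalloc(13) served from the 32-byte cache (a sketch
under the 1/8 shadow scaling; SCALE, REDZONE and model_kmalloc() are made
up for the example, the real code is in the diff below):

	#include <stdio.h>
	#include <stddef.h>

	#define SCALE	8	/* one shadow byte per 8 bytes of memory */
	#define REDZONE	0xFC	/* stands in for KASAN_KMALLOC_REDZONE */

	/* requested bytes become accessible, the rest of the object is redzone */
	static void model_kmalloc(unsigned char *shadow, size_t req, size_t size)
	{
		size_t i;

		for (i = 0; i < size / SCALE; i++) {
			if (req >= (i + 1) * SCALE)
				shadow[i] = 0;			/* fully accessible */
			else if (req > i * SCALE)
				shadow[i] = req - i * SCALE;	/* partial granule */
			else
				shadow[i] = REDZONE;
		}
	}

	int main(void)
	{
		unsigned char shadow[4];	/* covers one 32-byte object */

		model_kmalloc(shadow, 13, 32);
		printf("%02x %02x %02x %02x\n",		/* prints: 00 05 fc fc */
		       shadow[0], shadow[1], shadow[2], shadow[3]);
		return 0;
	}

So a load of object[13..31] hits a non-zero shadow value and is reported
as out of bounds, and kmem_cache_free()/kfree() later rewrites these
shadow bytes to KASAN_KMALLOC_FREE, so a stale access is reported as a
use-after-free.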

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
---
 include/linux/kasan.h | 27 ++++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  5 +++
 mm/kasan/report.c     | 22 ++++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 34 ++++++++++++++++--
 9 files changed, 201 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ab5131e..d4b69fa 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -34,6 +34,18 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -44,6 +56,21 @@ static inline void kasan_disable_local(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+					void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index ed2ffaa..76f1fee 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -325,7 +326,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -333,7 +337,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f86070d..ada0260 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
+	depends on SLUB_DEBUG
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 79f4fbc..3c1caa2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index ea86458..45d58f2 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -30,6 +30,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -261,6 +262,103 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_poison_slab(struct page *page)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)object + cache->object_size,
+				KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0f09fb2..f500a8a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,11 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+
 
 struct access_info {
 	unsigned long access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 4e26c68..edf1638 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -55,8 +56,11 @@ static void print_error_description(struct access_info *info)
 
 	switch (shadow_val) {
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
+	case KASAN_PAGE_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -77,6 +81,24 @@ static void print_address_description(struct access_info *info)
 	if ((addr >= PAGE_OFFSET) &&
 		(addr < (unsigned long)high_memory)) {
 		struct page *page = virt_to_head_page((void *)addr);
+
+		if (PageSlab(page)) {
+			void *object;
+			struct kmem_cache *cache = page->slab_cache;
+			void *last_object;
+
+			object = virt_to_obj(cache, page_address(page),
+					(void *)info->access_addr);
+			last_object = page_address(page) +
+				page->objects * cache->size;
+
+			if (unlikely(object > last_object))
+				object = last_object; /* we hit into padding */
+
+			object_err(cache, page, object,
+				"kasan: bad access detected");
+			return;
+		}
 		dump_page(page, "kasan: bad access detected");
 	}
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0dd9eb4..820a273 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -887,6 +887,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1066,8 +1067,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 38d34a8..9b481f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/kasan.h>
 
 #include <trace/events/kmem.h>
 
@@ -469,10 +470,12 @@ static int disable_higher_order_debug;
 
 static inline void metadata_access_enable(void)
 {
+	kasan_disable_local();
 }
 
 static inline void metadata_access_disable(void)
 {
+	kasan_enable_local();
 }
 
 /*
@@ -1242,11 +1245,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -1269,6 +1274,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
 	memcg_kmem_put_cache(s);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1292,6 +1298,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1386,8 +1394,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
+		kasan_poison_object_data(s, object);
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1420,6 +1431,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
+	kasan_poison_slab(page);
+
 	for_each_object_idx(p, idx, s, start, page->objects) {
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
@@ -2504,6 +2517,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2530,6 +2544,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2915,6 +2931,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3287,6 +3304,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3330,12 +3349,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3351,6 +3372,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3351,6 +3372,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	 * so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2015-01-29 15:11   ` Andrey Ryabinin
  (?)
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS (VFS...)

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated using kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it makes kasan complain
about such accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.
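
For illustration only, a minimal sketch of the kind of word-at-a-time
comparison that triggers the reports (an assumed shape, not the actual
dentry_string_cmp() code):

	static int wordwise_equal(const char *a, const char *b, size_t len)
	{
		const unsigned long *wa = (const unsigned long *)a;
		const unsigned long *wb = (const unsigned long *)b;
		size_t i, nwords = (len + sizeof(long) - 1) / sizeof(long);

		/* the last word may cover bytes past 'len' */
		for (i = 0; i < nwords; i++)
			if (wa[i] != wb[i])
				return 0;
		return 1;
	}

Since kmalloc() really does hand out at least the rounded up size,
unpoisoning round_up(name->len + 1, sizeof(unsigned long)) bytes, as
done below, silences the report without hiding real bugs.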

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS (VFS...)

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated using kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it makes kasan complain
about such accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS VFS...

We need to manually unpoison the rounded up allocation size for dname
to avoid kasan's reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested in kmalloc().

dentry_string_cmp() relies on the fact that the dentry name is
allocated using kmalloc() and that kmalloc() internally rounds up the
allocation size. So this is not a bug, but it makes kasan complain
about such accesses.
To avoid such reports we mark the rounded up allocation size in the
shadow as accessible.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 09/17] kmemleak: disable kasan instrumentation for kmemleak
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan reports
around those accesses.
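
As a rough sketch (an assumption about the implementation, not part of
this patch), the disable/enable helpers can be little more than a
per-task nesting counter (the kasan_depth field used elsewhere in this
series) that the kasan report path checks before printing anything:

	static inline void kasan_disable_local(void)
	{
		current->kasan_depth++;	/* reports suppressed while > 0 */
	}

	static inline void kasan_enable_local(void)
	{
		current->kasan_depth--;
	}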

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 09/17] kmemleak: disable kasan instrumentation for kmemleak
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan reports
around those accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..9bda1b3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_local();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_local();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_local();
 		pointer = *ptr;
+		kasan_enable_local();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 10/17] lib: add kasan test module
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

This is a test module doing various nasty things like
out of bounds accesses and use after free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more different stuff here in the future (like
out of bounds accesses to stack/global variables and so on).
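
For reference, the intended usage (an assumption, not spelled out in
the patch itself) is to build the module on a CONFIG_KASAN=y kernel,
load it with insmod test_kasan.ko and read the resulting kasan reports
from dmesg; the init function below deliberately returns -EAGAIN, so
the module never stays loaded.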

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index ada0260..f3bee26 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index b1dbda7..5b11c8f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -37,6 +37,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..098c08e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+	volatile int i = 3;
+	char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+	pr_info("out-of-bounds global variable\n");
+	*(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+	char stack_array[10];
+	volatile int i = 0;
+	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+	pr_info("out-of-bounds on stack\n");
+	*(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	kasan_stack_oob();
+	kasan_global_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 10/17] lib: add kasan test module
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

This is a test module doing various nasty things like
out of bounds accesses and use after free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more different stuff here in the future (like
out of bounds accesses to stack/global variables and so on).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index ada0260..f3bee26 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -43,4 +43,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index b1dbda7..5b11c8f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -37,6 +37,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..098c08e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+	volatile int i = 3;
+	char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+	pr_info("out-of-bounds global variable\n");
+	*(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+	char stack_array[10];
+	volatile int i = 0;
+	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+	pr_info("out-of-bounds on stack\n");
+	*(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	kasan_stack_oob();
+	kasan_global_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 11/17] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Matt Fleming, H. Peter Anvin, Thomas Gleixner,
	Ingo Molnar, open list:EXTENSIBLE FIRMWA...

Recently, instrumentation of builtin function calls was removed from
GCC 5.0. To check the memory accessed by such functions, userspace
asan always uses interceptors for them.

So now we should do this as well. This patch declares
memset/memmove/memcpy as weak symbols. In mm/kasan/kasan.c we have our
own implementation of those functions which checks memory before
accessing it.

The default memset/memmove/memcpy now always have aliases with a '__'
prefix. For files built without kasan instrumentation (e.g. mm/slub.c)
the original mem* functions are replaced (via #define) with the
prefixed variants, because we don't want to check memory accesses
there.
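
In other words (a summary of the mechanism, not additional changes):
instrumented code calling memset/memmove/memcpy ends up in the
checking wrappers added to mm/kasan/kasan.c below, since the strong C
definitions override the weak assembly aliases, while uninstrumented
files and the boot/EFI stub code keep resolving to the plain
'__'-prefixed implementations.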

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 31 ++++++++++++++++++++++++++++++-
 9 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use non-instrumented versions of the mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 2be1098..47437b1 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -5,6 +5,10 @@
 /* error code which can't be mistaken for valid address */
 #define EFI_ERROR	(~0UL)
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 void efi_char16_printk(efi_system_table_t *, efi_char16_t *);
 
 efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 45d58f2..8c0bdd6 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 11/17] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Matt Fleming, H. Peter Anvin, Thomas Gleixner,
	Ingo Molnar, open list:EXTENSIBLE FIRMWA...

Recently, instrumentation of builtin function calls was removed from
GCC 5.0. To check the memory accessed by such functions, userspace
asan always uses interceptors for them.

So now we should do this as well. This patch declares
memset/memmove/memcpy as weak symbols. In mm/kasan/kasan.c we have our
own implementation of those functions which checks memory before
accessing it.

The default memset/memmove/memcpy now always have aliases with a '__'
prefix. For files built without kasan instrumentation (e.g. mm/slub.c)
the original mem* functions are replaced (via #define) with the
prefixed variants, because we don't want to check memory accesses
there.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 31 ++++++++++++++++++++++++++++++-
 9 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use non-instrumented versions of the mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 2be1098..47437b1 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -5,6 +5,10 @@
 /* error code which can't be mistaken for valid address */
 #define EFI_ERROR	(~0UL)
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 void efi_char16_printk(efi_system_table_t *, efi_char16_t *);
 
 efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 45d58f2..8c0bdd6 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -44,7 +44,7 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
 
-	memset((void *)shadow_start, value, shadow_end - shadow_start);
+	__memset((void *)shadow_start, value, shadow_end - shadow_start);
 }
 
 void kasan_unpoison_shadow(const void *address, size_t size)
@@ -248,6 +248,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 12/17] kasan: enable stack instrumentation
  2015-01-29 15:11   ` Andrey Ryabinin
  (?)
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out of bounds memory
accesses for variables allocated on the stack. The compiler adds
redzones around every variable on the stack and poisons those
redzones in the function's prologue.

Such an approach significantly increases stack usage, so the size of
all in-kernel stacks has been doubled.
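
As a rough illustration (simplified, not actual compiler output): for
a function with a single 8-byte local buffer, the prologue fills the
shadow of the enlarged stack frame roughly like

	f1 f1 f1 f1  00  f3 f3 f3
	 left redzone | buf | right redzone

so any access that spills out of the buffer hits a shadow byte holding
one of the KASAN_STACK_* values defined below and gets reported as an
out of bounds access on the stack. The stack doubling itself is done
by adding KASAN_STACK_ORDER to the various *_STACK_ORDER constants in
page_64_types.h.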

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          |  8 ++++++++
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index cfb932e..9498ece 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,9 +189,17 @@ void __init kasan_init(void)
 			panic("kasan: unable to allocate shadow!");
 	}
 	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			kasan_mem_to_shadow(__START_KERNEL_map));
+
+	vmemmap_populate(kasan_mem_to_shadow((unsigned long)_stext),
+			kasan_mem_to_shadow((unsigned long)_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
 			KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f500a8a..9efc523 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct access_info {
 	unsigned long access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index edf1638..c83e397 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 12/17] kasan: enable stack instrumentation
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out of bounds memory
accesses for variables allocated on the stack. The compiler adds
redzones around every variable on the stack and poisons those
redzones in the function's prologue.

Such an approach significantly increases stack usage, so the size of
all in-kernel stacks has been doubled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          |  8 ++++++++
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index cfb932e..9498ece 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,9 +189,17 @@ void __init kasan_init(void)
 			panic("kasan: unable to allocate shadow!");
 	}
 	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			kasan_mem_to_shadow(__START_KERNEL_map));
+
+	vmemmap_populate(kasan_mem_to_shadow((unsigned long)_stext),
+			kasan_mem_to_shadow((unsigned long)_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
 			KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f500a8a..9efc523 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct access_info {
 	unsigned long access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index edf1638..c83e397 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 12/17] kasan: enable stack instrumentation
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function's prologue.

Such an approach significantly increases stack usage,
so the size of every in-kernel stack has been doubled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          |  8 ++++++++
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index cfb932e..9498ece 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,9 +189,17 @@ void __init kasan_init(void)
 			panic("kasan: unable to allocate shadow!");
 	}
 	populate_zero_shadow(kasan_mem_to_shadow(PAGE_OFFSET + MAXMEM),
+			kasan_mem_to_shadow(__START_KERNEL_map));
+
+	vmemmap_populate(kasan_mem_to_shadow((unsigned long)_stext),
+			kasan_mem_to_shadow((unsigned long)_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
 			KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f500a8a..9efc523 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct access_info {
 	unsigned long access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index edf1638..c83e397 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 13/17] mm: vmalloc: add flag preventing guard hole allocation
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables KASan needs shadow memory
backing the memory used by modules. So on module load we will
need to allocate shadow memory and map it at an exact virtual
address. __vmalloc_node_range() seems like the best fit for that
purpose, except that it puts a guard hole after the allocated area.

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm
area doesn't have a guard hole.
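
As a rough, hypothetical sketch of the resulting size accounting (not
code from this patch; the sizes are only for illustration):

	struct vm_struct *area = get_vm_area(4 * PAGE_SIZE, VM_ALLOC);

	/*
	 * Without VM_NO_GUARD one extra guard page is reserved, so
	 * area->size is 5 * PAGE_SIZE while get_vm_area_size(area)
	 * still returns the 4 usable pages.  With VM_NO_GUARD set,
	 * no guard page is added and both values are 4 * PAGE_SIZE.
	 */
	pr_info("usable: %zu bytes\n", get_vm_area_size(area));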

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 13/17] mm: vmalloc: add flag preventing guard hole allocation
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables KASan needs shadow memory
backing the memory used by modules. So on module load we will
need to allocate shadow memory and map it at an exact virtual
address. __vmalloc_node_range() seems like the best fit for that
purpose, except that it puts a guard hole after the allocated area.

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm
area doesn't have a guard hole.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  2015-01-29 15:11   ` Andrey Ryabinin
                       ` (3 preceding siblings ...)
  (?)
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.
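
As a hypothetical sketch of how a later KASan change might use the new
argument (the 'shadow_start' and 'shadow_size' names are made up here):

	/* Map shadow for a module region without a trailing guard page. */
	void *shadow = __vmalloc_node_range(shadow_size, 1, shadow_start,
					shadow_start + shadow_size,
					GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL,
					VM_NO_GUARD, NUMA_NO_NODE,
					__builtin_return_address(0));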

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-01-29 15:11     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

For instrumenting global variables KASan needs shadow memory backing
the memory used by modules. So on module load we will need to
allocate shadow memory and map it at an exact virtual address.
__vmalloc_node_range() seems like the best fit for that purpose,
except that it puts a guard hole after the allocated area.

Now that we have the VM_NO_GUARD flag disabling the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new
parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 15/17] kernel: add support for .init_array.* constructors
  2015-01-29 15:11   ` Andrey Ryabinin
  (?)
@ 2015-01-29 15:11     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Arnd Bergmann, open list:GENERIC INCLUDE/A...

KASan uses constructors to initialize redzones for global variables.
KASan doesn't actually need constructor priorities, so they are gone
from the constructors generated by GCC 5.0, but GCC 4.9.2 still emits
constructors with priorities, which are placed in .init_array.* sections.
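
(Illustrative example, not part of this patch: a constructor declared with
a priority, as GCC 4.9.2 emits for KASan, lands in a priority-suffixed
section such as .init_array.00101, which the kernel linker scripts did not
collect before this change. The function below is hypothetical.)

	/* placed by GCC in a .init_array.* section because of the priority */
	static void __attribute__((constructor(101))) example_prio_ctor(void)
	{
		/* e.g. register globals with the KASan runtime */
	}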

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/asm-generic/vmlinux.lds.h | 1 +
 scripts/module-common.lds         | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bee5d68..ac78910 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -478,6 +478,7 @@
 #define KERNEL_CTORS()	. = ALIGN(8);			   \
 			VMLINUX_SYMBOL(__ctors_start) = .; \
 			*(.ctors)			   \
+			*(SORT(.init_array.*))		   \
 			*(.init_array)			   \
 			VMLINUX_SYMBOL(__ctors_end) = .;
 #else
diff --git a/scripts/module-common.lds b/scripts/module-common.lds
index 0865b3e..01c5849 100644
--- a/scripts/module-common.lds
+++ b/scripts/module-common.lds
@@ -16,4 +16,7 @@ SECTIONS {
 	__kcrctab_unused	: { *(SORT(___kcrctab_unused+*)) }
 	__kcrctab_unused_gpl	: { *(SORT(___kcrctab_unused_gpl+*)) }
 	__kcrctab_gpl_future	: { *(SORT(___kcrctab_gpl_future+*)) }
+
+	. = ALIGN(8);
+	.init_array		: { *(SORT(.init_array.*)) *(.init_array) }
 }
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 16/17] module: fix types of device tables aliases
  2015-01-29 15:11   ` Andrey Ryabinin
@ 2015-01-29 15:12     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:12 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Rusty Russell

The MODULE_DEVICE_TABLE() macro is used to create aliases to device
tables. Normally an alias should have the same type as the aliased
symbol.

Device tables are arrays, so they have 'struct type##_device_id[x]'
types. The alias created by MODULE_DEVICE_TABLE() has the non-array
type 'struct type##_device_id'.

This inconsistency confuses the compiler: it can make a wrong assumption
about the variable's size, which leads KASan to produce a false positive
report about an out-of-bounds access.
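
(For illustration, using a hypothetical 'my_ids' PCI table; the old and new
macro expansions below are spelled out from the diff that follows.)

	static const struct pci_device_id my_ids[] = { /* ... */ { } };

	/*
	 * Old expansion: a non-array type, so the compiler's idea of the
	 * alias' size differs from the real array:
	 *
	 *   extern const struct pci_device_id __mod_pci__my_ids_device_table
	 *           __attribute__ ((unused, alias("my_ids")));
	 *
	 * New expansion: typeof() keeps the array type (and thus the size):
	 */
	extern typeof(my_ids) __mod_pci__my_ids_device_table
		__attribute__ ((unused, alias("my_ids")));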

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/module.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index b653d7c..7e3ccd0 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
 #ifdef MODULE
 /* Creates an alias so file2alias.c can find device table. */
 #define MODULE_DEVICE_TABLE(type, name)					\
-  extern const struct type##_device_id __mod_##type##__##name##_device_table \
+extern typeof(name) __mod_##type##__##name##_device_table \
   __attribute__ ((unused, alias(__stringify(name))))
 #else  /* !MODULE */
 #define MODULE_DEVICE_TABLE(type, name)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v10 17/17] kasan: enable instrumentation of global variables
  2015-01-29 15:11   ` Andrey Ryabinin
  (?)
@ 2015-01-29 15:12     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-29 15:12 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Rusty Russell, Michal Marek, open list:KERNEL BUILD + fi...

This feature lets us detect out-of-bounds accesses to global variables.

The idea is simple: the compiler pads each global variable with a redzone
and adds constructors that invoke the __asan_register_globals() function.
Information about each global variable (address, size, size with
redzone, ...) is passed to __asan_register_globals() so that we can
poison the variable's redzone.

This patch also forces module_alloc() to return an 8*PAGE_SIZE aligned
address, which makes shadow memory handling
(kasan_module_alloc()/kasan_module_free()) simpler. Such alignment
guarantees that each shadow page backing the modules' address space
corresponds to only one module_alloc() allocation.
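
(Conceptual sketch, not actual compiler output: roughly what the generated
instrumentation amounts to for a single global, using the struct kasan_global
layout added by this patch. All names and sizes below are made up.)

	char some_global[32];	/* the compiler appends a redzone after this */

	static struct kasan_global some_global_desc = {
		.beg = some_global,
		.size = sizeof(some_global),
		.size_with_redzone = 64,	/* size + redzone, illustrative */
	};

	/* emitted constructor: registers the global so its redzone is poisoned */
	static void __attribute__((constructor)) asan_ctor(void)
	{
		__asan_register_globals(&some_global_desc, 1);
	}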

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/module.c      | 12 ++++++++++-
 arch/x86/mm/kasan_init_64.c   |  2 +-
 include/linux/compiler-gcc4.h |  4 ++++
 include/linux/compiler-gcc5.h |  2 ++
 include/linux/kasan.h         | 10 +++++++++
 kernel/module.c               |  2 ++
 lib/Kconfig.kasan             |  1 +
 mm/kasan/kasan.c              | 50 +++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h              | 23 ++++++++++++++++++++
 mm/kasan/report.c             | 22 +++++++++++++++++++
 scripts/Makefile.kasan        |  5 +++--
 11 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e830e61..d1ac80b 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -24,6 +24,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/kasan.h>
 #include <linux/bug.h>
 #include <linux/mm.h>
 #include <linux/gfp.h>
@@ -83,13 +84,22 @@ static unsigned long int get_module_load_offset(void)
 
 void *module_alloc(unsigned long size)
 {
+	void *p;
+
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
-	return __vmalloc_node_range(size, 1,
+
+	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
 				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
+	if (p && (kasan_module_alloc(p, size) < 0)) {
+		vfree(p);
+		return NULL;
+	}
+
+	return p;
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 9498ece..7a20ec5 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -195,7 +195,7 @@ void __init kasan_init(void)
 			kasan_mem_to_shadow((unsigned long)_end),
 			NUMA_NO_NODE);
 
-	populate_zero_shadow(kasan_mem_to_shadow(MODULES_VADDR),
+	populate_zero_shadow(kasan_mem_to_shadow(MODULES_END),
 			KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
diff --git a/include/linux/compiler-gcc4.h b/include/linux/compiler-gcc4.h
index d1a5582..769e198 100644
--- a/include/linux/compiler-gcc4.h
+++ b/include/linux/compiler-gcc4.h
@@ -85,3 +85,7 @@
 #define __HAVE_BUILTIN_BSWAP16__
 #endif
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#if GCC_VERSION >= 40902
+#define KASAN_ABI_VERSION 3
+#endif
diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
index c8c5659..efee493 100644
--- a/include/linux/compiler-gcc5.h
+++ b/include/linux/compiler-gcc5.h
@@ -63,3 +63,5 @@
 #define __HAVE_BUILTIN_BSWAP64__
 #define __HAVE_BUILTIN_BSWAP16__
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#define KASAN_ABI_VERSION 4
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d4b69fa..2630169 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -46,8 +46,15 @@ void kasan_krealloc(const void *object, size_t new_size);
 void kasan_slab_alloc(struct kmem_cache *s, void *object);
 void kasan_slab_free(struct kmem_cache *s, void *object);
 
+#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+
+int kasan_module_alloc(void *addr, size_t size);
+void kasan_module_free(void *addr);
+
 #else /* CONFIG_KASAN */
 
+#define MODULE_ALIGN 1
+
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 
 static inline void kasan_enable_local(void) {}
@@ -71,6 +78,9 @@ static inline void kasan_krealloc(const void *object, size_t new_size) {}
 static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_module_free(void *addr) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/kernel/module.c b/kernel/module.c
index d856e96..f842027 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -56,6 +56,7 @@
 #include <linux/async.h>
 #include <linux/percpu.h>
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 #include <linux/jump_label.h>
 #include <linux/pfn.h>
 #include <linux/bsearch.h>
@@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
 void __weak module_memfree(void *module_region)
 {
 	vfree(module_region);
+	kasan_module_free(module_region);
 }
 
 void __weak module_arch_cleanup(struct module *mod)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f3bee26..6b00c65 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -7,6 +7,7 @@ config KASAN
 	bool "AddressSanitizer: runtime memory debugger"
 	depends on !MEMORY_HOTPLUG
 	depends on SLUB_DEBUG
+	select CONSTRUCTORS
 	help
 	  Enables address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 8c0bdd6..2a68aa3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/memblock.h>
 #include <linux/mm.h>
+#include <linux/module.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -388,6 +389,55 @@ void kasan_kfree_large(const void *ptr)
 			KASAN_FREE_PAGE);
 }
 
+int kasan_module_alloc(void *addr, size_t size)
+{
+
+	size_t shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+				PAGE_SIZE);
+	unsigned long shadow_start = kasan_mem_to_shadow((unsigned long)addr);
+	void *ret;
+
+	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
+		return -EINVAL;
+
+	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
+			shadow_start + shadow_size,
+			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
+			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
+			__builtin_return_address(0));
+	return ret ? 0 : -ENOMEM;
+}
+
+void kasan_module_free(void *addr)
+{
+	vfree((void *)kasan_mem_to_shadow((unsigned long)addr));
+}
+
+static void register_global(struct kasan_global *global)
+{
+	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(global->beg, global->size);
+
+	kasan_poison_shadow(global->beg + aligned_size,
+		global->size_with_redzone - aligned_size,
+		KASAN_GLOBAL_REDZONE);
+}
+
+void __asan_register_globals(struct kasan_global *globals, size_t size)
+{
+	int i;
+
+	for (i = 0; i < size; i++)
+		register_global(&globals[i]);
+}
+EXPORT_SYMBOL(__asan_register_globals);
+
+void __asan_unregister_globals(struct kasan_global *globals, size_t size)
+{
+}
+EXPORT_SYMBOL(__asan_unregister_globals);
+
 #define DECLARE_ASAN_CHECK(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 9efc523..b611a74 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -11,6 +11,7 @@
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
 
 /*
  * Stack redzone shadow values
@@ -21,6 +22,10 @@
 #define KASAN_STACK_RIGHT       0xF3
 #define KASAN_STACK_PARTIAL     0xF4
 
+/* Don't break randconfig/all*config builds */
+#ifndef KASAN_ABI_VERSION
+#define KASAN_ABI_VERSION 1
+#endif
 
 struct access_info {
 	unsigned long access_addr;
@@ -30,6 +35,24 @@ struct access_info {
 	unsigned long ip;
 };
 
+struct kasan_source_location {
+	const char *filename;
+	int line_no;
+	int column_no;
+};
+
+struct kasan_global {
+	const void *beg;		/* Address of the beginning of the global variable. */
+	size_t size;			/* Size of the global variable. */
+	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
+	const void *name;
+	const void *module_name;	/* Name of the module where the global variable is declared. */
+	unsigned long has_dynamic_init;	/* This needed for C++ */
+#if KASAN_ABI_VERSION >= 4
+	struct kasan_source_location *location;
+#endif
+};
+
 void kasan_report_error(struct access_info *info);
 void kasan_report_user_access(struct access_info *info);
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index c83e397..bc15798 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -23,6 +23,8 @@
 #include <linux/types.h>
 #include <linux/kasan.h>
 
+#include <asm/sections.h>
+
 #include "kasan.h"
 #include "../slab.h"
 
@@ -61,6 +63,7 @@ static void print_error_description(struct access_info *info)
 		break;
 	case KASAN_PAGE_REDZONE:
 	case KASAN_KMALLOC_REDZONE:
+	case KASAN_GLOBAL_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -80,6 +83,19 @@ static void print_error_description(struct access_info *info)
 		info->access_size, current->comm, task_pid_nr(current));
 }
 
+static inline bool kernel_or_module_addr(unsigned long addr)
+{
+	return (addr >= (unsigned long)_stext && addr < (unsigned long)_end)
+		|| (addr >= MODULES_VADDR  && addr < MODULES_END);
+}
+
+static inline bool init_task_stack_addr(unsigned long addr)
+{
+	return addr >= (unsigned long)&init_thread_union.stack &&
+		(addr <= (unsigned long)&init_thread_union.stack +
+			sizeof(init_thread_union.stack));
+}
+
 static void print_address_description(struct access_info *info)
 {
 	unsigned long addr = info->access_addr;
@@ -108,6 +124,12 @@ static void print_address_description(struct access_info *info)
 		dump_page(page, "kasan: bad access detected");
 	}
 
+	if (kernel_or_module_addr(addr)) {
+		if (!init_task_stack_addr(addr))
+			pr_err("Address belongs to variable %pS\n",
+				(void *)addr);
+	}
+
 	dump_stack();
 }
 
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 0ac7d1d..df302f8 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -5,11 +5,12 @@ else
 	call_threshold := 0
 endif
 
-CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address \
+				--param asan-globals=1)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
-		--param asan-stack=1 \
+		--param asan-stack=1 --param asan-globals=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
  2015-01-29 15:11     ` Andrey Ryabinin
  (?)
  (?)
@ 2015-01-29 15:39     ` Michal Marek
  -1 siblings, 0 replies; 862+ messages in thread
From: Michal Marek @ 2015-01-29 15:39 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

On 2015-01-29 16:11, Andrey Ryabinin wrote:
> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> 
> KASAN uses compile-time instrumentation for checking every memory access,
> therefore GCC >= v4.9.2 required.
> 
> This patch only adds infrastructure for kernel address sanitizer. It's not
> available for use yet. The idea and some code was borrowed from [1].

For the kbuild bits, you can add

  Acked-by: Michal Marek <mmarek@suse.cz>

Michal

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
  2015-01-29 15:11     ` Andrey Ryabinin
  (?)
@ 2015-01-29 23:12       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

On Thu, 29 Jan 2015 18:11:45 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
> fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> 
> KASAN uses compile-time instrumentation for checking every memory access,
> therefore GCC >= v4.9.2 required.
> 
> ...
>
> Based on work by Andrey Konovalov <adech.fo@gmail.com>

Can we obtain Andrey's signed-off-by: please?
 
> +void kasan_unpoison_shadow(const void *address, size_t size)
> +{
> +	kasan_poison_shadow(address, size, 0);
> +
> +	if (size & KASAN_SHADOW_MASK) {
> +		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
> +						+ size);
> +		*shadow = size & KASAN_SHADOW_MASK;
> +	}
> +}

There's a lot of typecasting happening with kasan_mem_to_shadow().  In
this patch the return value gets typecast more often than not, and the
argument gets cast quite a lot as well.  I suspect the code would turn
out better if kasan_mem_to_shadow() were to take a (const?) void* arg
and were to return a void*.
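
(For example, the suggested signature could look roughly like this, mirroring
the kasan_shadow_to_mem() helper quoted further down; just a sketch, not a
patch:)

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
				+ KASAN_SHADOW_OFFSET);
	}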

> +static __always_inline bool memory_is_poisoned_1(unsigned long addr)

What's with all the __always_inline in this file?  When I remove them
all, kasan.o .text falls from 8294 bytes down to 4543 bytes.  That's
massive, and quite possibly faster.

If there's some magical functional reason for this then can we please
get a nice prominent comment into this code apologetically explaining
it?

> +{
> +	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
> +
> +	if (unlikely(shadow_value)) {
> +		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
> +		return unlikely(last_accessible_byte >= shadow_value);
> +	}
> +
> +	return false;
> +}
> +
> 
> ...
>
> +
> +#define DECLARE_ASAN_CHECK(size)				\

DEFINE_ASAN_CHECK would be more accurate.  Because this macro expands
to definitions, not declarations.

> +	void __asan_load##size(unsigned long addr)		\
> +	{							\
> +		check_memory_region(addr, size, false);		\
> +	}							\
> +	EXPORT_SYMBOL(__asan_load##size);			\
> +	__attribute__((alias("__asan_load"#size)))		\
> +	void __asan_load##size##_noabort(unsigned long);	\
> +	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
> +	void __asan_store##size(unsigned long addr)		\
> +	{							\
> +		check_memory_region(addr, size, true);		\
> +	}							\
> +	EXPORT_SYMBOL(__asan_store##size);			\
> +	__attribute__((alias("__asan_store"#size)))		\
> +	void __asan_store##size##_noabort(unsigned long);	\
> +	EXPORT_SYMBOL(__asan_store##size##_noabort)
> +
> +DECLARE_ASAN_CHECK(1);
> +DECLARE_ASAN_CHECK(2);
> +DECLARE_ASAN_CHECK(4);
> +DECLARE_ASAN_CHECK(8);
> +DECLARE_ASAN_CHECK(16);
> +
> +void __asan_loadN(unsigned long addr, size_t size)
> +{
> +	check_memory_region(addr, size, false);
> +}
> +EXPORT_SYMBOL(__asan_loadN);
> +
> +__attribute__((alias("__asan_loadN")))

Maybe we need a __alias.  Like __packed and various other helpers.
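
(Such a helper could be a one-liner along these lines; an illustrative
sketch, not an existing definition:)

	#define __alias(symbol)	__attribute__((alias(#symbol)))

	/* which would turn the declaration below into, e.g.:
	 *	void __asan_loadN_noabort(unsigned long, size_t) __alias(__asan_loadN);
	 */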

> +void __asan_loadN_noabort(unsigned long, size_t);
> +EXPORT_SYMBOL(__asan_loadN_noabort);
> +
> +void __asan_storeN(unsigned long addr, size_t size)
> +{
> +	check_memory_region(addr, size, true);
> +}
> +EXPORT_SYMBOL(__asan_storeN);
> +
> +__attribute__((alias("__asan_storeN")))
> +void __asan_storeN_noabort(unsigned long, size_t);
> +EXPORT_SYMBOL(__asan_storeN_noabort);
> +
> +/* to shut up compiler complaints */
> +void __asan_handle_no_return(void) {}
> +EXPORT_SYMBOL(__asan_handle_no_return);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> new file mode 100644
> index 0000000..da0e53c
> --- /dev/null
> +++ b/mm/kasan/kasan.h
> @@ -0,0 +1,47 @@
> +#ifndef __MM_KASAN_KASAN_H
> +#define __MM_KASAN_KASAN_H
> +
> +#include <linux/kasan.h>
> +
> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
> +
> +struct access_info {

kasan_access_info would be a better name.

> +	unsigned long access_addr;
> +	unsigned long first_bad_addr;
> +	size_t access_size;
> +	bool is_write;
> +	unsigned long ip;
> +};
> +
> +void kasan_report_error(struct access_info *info);
> +void kasan_report_user_access(struct access_info *info);
> +
> +static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
> +{
> +	return (shadow_addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
> +}
> +
> +static inline bool kasan_enabled(void)
> +{
> +	return !current->kasan_depth;
> +}
> +
> +static __always_inline void kasan_report(unsigned long addr,
> +					size_t size,
> +					bool is_write)

Again, why the inline?  This is presumably not a hotpath and
kasan_report has sixish call sites.


> +{
> +	struct access_info info;
> +
> +	if (likely(!kasan_enabled()))
> +		return;
> +
> +	info.access_addr = addr;
> +	info.access_size = size;
> +	info.is_write = is_write;
> +	info.ip = _RET_IP_;
> +	kasan_report_error(&info);
> +}
> 
> ...
>
> +static void print_error_description(struct access_info *info)
> +{
> +	const char *bug_type = "unknown crash";
> +	u8 shadow_val;
> +
> +	info->first_bad_addr = find_first_bad_addr(info->access_addr,
> +						info->access_size);
> +
> +	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
> +
> +	switch (shadow_val) {
> +	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +		bug_type = "out of bounds access";
> +		break;
> +	}
> +
> +	pr_err("BUG: AddressSanitizer: %s in %pS at addr %p\n",

Sometimes it's called "kasan", sometimes "AddressSanitizer".  Wouldn't
it be better to use the same name everywhere?

> +		bug_type, (void *)info->ip,
> +		(void *)info->access_addr);
> +	pr_err("%s of size %zu by task %s/%d\n",
> +		info->is_write ? "Write" : "Read",
> +		info->access_size, current->comm, task_pid_nr(current));
> +}
> +
> +static void print_address_description(struct access_info *info)
> +{
> +	dump_stack();
> +}

dump_stack() uses KERN_INFO but the callers of
print_address_description() use KERN_ERR.  This means that at some
settings of `dmesg -n', the kasan output will have large missing
chunks.

Please test this and decide how bad it is.  A proper fix will be
somewhat messy (new_dump_stack(KERN_ERR)).

> +static bool row_is_guilty(unsigned long row, unsigned long guilty)
> +{
> +	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
> +}
> +
> +static int shadow_pointer_offset(unsigned long row, unsigned long shadow)
> +{
> +	/* The length of ">ff00ff00ff00ff00: " is
> +	 *    3 + (BITS_PER_LONG/8)*2 chars.
> +	 */
> +	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
> +		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
> +}
> +
> +static void print_shadow_for_address(unsigned long addr)
> +{
> +	int i;
> +	unsigned long shadow = kasan_mem_to_shadow(addr);
> +	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
> +		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;

You don't *have* to initialize at the definition site.  You can do

	unsigned long aligned_shadow;
	...
	aligned_shadow = ...;

and the 80-col tricks often come out looking better.

> +	pr_err("Memory state around the buggy address:\n");
> +
> +	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
> +		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
> +		char buffer[4 + (BITS_PER_LONG/8)*2];
> +
> +		snprintf(buffer, sizeof(buffer),
> +			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
> +
> +		kasan_disable_local();
> +		print_hex_dump(KERN_ERR, buffer,
> +			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
> +			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
> +		kasan_enable_local();
> +
> +		if (row_is_guilty(aligned_shadow, shadow))
> +			pr_err("%*c\n",
> +				shadow_pointer_offset(aligned_shadow, shadow),
> +				'^');
> +
> +		aligned_shadow += SHADOW_BYTES_PER_ROW;
> +	}
> +}
> 
> ...
>


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 02/17] x86_64: add KASan support
  2015-01-29 15:11     ` Andrey Ryabinin
@ 2015-01-29 23:12       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jonathan Corbet,
	Andy Lutomirski, open list:DOCUMENTATION

On Thu, 29 Jan 2015 18:11:46 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> This patch adds arch specific code for kernel address sanitizer.
> 
> 16TB of virtual address space is used for shadow memory.
> It's located in range [ffffec0000000000 - fffffc0000000000]
> between vmemmap and %esp fixup stacks.
> 
> At an early stage we map the whole shadow region with the zero page.
> Later, after pages are mapped into the direct mapping address range,
> we unmap zero pages from the corresponding shadow (see kasan_map_shadow())
> and allocate and map real shadow memory by reusing the vmemmap_populate()
> function.
> 
> Also replace __pa with __pa_nodebug before the shadow is initialized.
> __pa with CONFIG_DEBUG_VIRTUAL=y makes an external function call (__phys_addr).
> __phys_addr is instrumented, so __asan_load could be called before the
> shadow area is initialized.
> 
> ...
>
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
>  
>  config KASAN
>  	bool "AddressSanitizer: runtime memory debugger"
> +	depends on !MEMORY_HOTPLUG
>  	help
>  	  Enables address sanitizer - runtime memory debugger,
>  	  designed to find out-of-bounds accesses and use-after-free bugs.

That's a significant restriction.  It has obvious runtime implications.
It also means that `make allmodconfig' and `make allyesconfig' don't
enable kasan, so compile coverage will be impacted.

This wasn't changelogged.  What's the reasoning and what has to be done
to fix it?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 04/17] mm: slub: introduce virt_to_obj function.
  2015-01-29 15:11     ` Andrey Ryabinin
@ 2015-01-29 23:12       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Thu, 29 Jan 2015 18:11:48 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> virt_to_obj takes kmem_cache address, address of slab page,
> address x pointing somewhere inside slab object,
> and returns address of the begging of object.

"beginning"

The above text may as well be placed into slub_def.h as a comment.

> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> Acked-by: Christoph Lameter <cl@linux.com>
> ---
>  include/linux/slub_def.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 9abf04e..eca3883 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
>  }
>  #endif
>  
> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
> +{
> +	return x - ((x - slab_page) % s->size);
> +}

"const void *x" would be better.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-29 15:11     ` Andrey Ryabinin
@ 2015-01-29 23:12       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Thu, 29 Jan 2015 18:11:50 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> Wrap access to object's metadata in external functions with
> metadata_access_enable()/metadata_access_disable() function calls.
> 
> These hooks separate payload accesses from metadata accesses
> which might be useful for different checkers (e.g. KASan).
> 
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -467,13 +467,23 @@ static int slub_debug;
>  static char *slub_debug_slabs;
>  static int disable_higher_order_debug;
>  
> +static inline void metadata_access_enable(void)
> +{
> +}
> +
> +static inline void metadata_access_disable(void)
> +{
> +}

Some code comments here would be useful.  What they do, why they exist,
etc.  The next patch fills them in with
kasan_disable_local/kasan_enable_local but that doesn't help the reader
to understand what's going on.  The fact that
kasan_disable_local/kasan_enable_local are also undocumented doesn't
help.
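
Something along these lines, perhaps (wording is mine, only meant to
illustrate what such a comment could say):

	/*
	 * Object debug metadata (redzones, allocation/free tracking,
	 * poison bytes) lives outside the object payload, so a checker
	 * like KASan would flag accesses to it as out-of-bounds.  These
	 * hooks bracket all metadata accesses; the next patch wires them
	 * up to locally disable/re-enable KASan checking.
	 */
	static inline void metadata_access_enable(void)
	{
	}

	static inline void metadata_access_disable(void)
	{
	}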




^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 13/17] mm: vmalloc: add flag preventing guard hole allocation
  2015-01-29 15:11     ` Andrey Ryabinin
@ 2015-01-29 23:12       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm

On Thu, 29 Jan 2015 18:11:57 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> For instrumenting global variables KASan will shadow memory
> backing memory for modules. So on module loading we will need
> to allocate shadow memory and map it at exact virtual address.

I don't understand.  What does "map it at exact virtual address" mean?

> __vmalloc_node_range() seems like the best fit for that purpose,
> except it puts a guard hole after allocated area.

Why is the guard hole a problem?

More details needed in this changelog, please.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 15/17] kernel: add support for .init_array.* constructors
  2015-01-29 15:11     ` Andrey Ryabinin
  (?)
@ 2015-01-29 23:13       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:13 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Arnd Bergmann, open list:GENERIC INCLUDE/A...

On Thu, 29 Jan 2015 18:11:59 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> KASan uses constructors for initializing redzones for global
> variables. Actually KASan doesn't need priorities for constructors,
> so they were removed from GCC 5.0, but GCC 4.9.2 still generates
> constructors with priorities.

I don't understand this changelog either.  What's wrong with priorities
and what is the patch doing about it?  More details, please.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 16/17] module: fix types of device tables aliases
  2015-01-29 15:12     ` Andrey Ryabinin
@ 2015-01-29 23:13       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:13 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Rusty Russell

On Thu, 29 Jan 2015 18:12:00 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> The MODULE_DEVICE_TABLE() macro is used to create aliases to device tables.
> Normally an alias should have the same type as the aliased symbol.
> 
> Device tables are arrays, so they have 'struct type##_device_id[x]'
> types. Alias created by MODULE_DEVICE_TABLE() will have non-array type -
> 	'struct type##_device_id'.
> 
> This inconsistency confuses the compiler: it could make a wrong
> assumption about the variable's size, which leads KASan to
> produce a false positive report about an out-of-bounds access.

The changelog describes the problem but doesn't describe how the patch
addresses the problem.  Some more details would be useful.

> --- a/include/linux/module.h
> +++ b/include/linux/module.h
> @@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
>  #ifdef MODULE
>  /* Creates an alias so file2alias.c can find device table. */
>  #define MODULE_DEVICE_TABLE(type, name)					\
> -  extern const struct type##_device_id __mod_##type##__##name##_device_table \
> +extern typeof(name) __mod_##type##__##name##_device_table \
>    __attribute__ ((unused, alias(__stringify(name))))

We lost the const?  If that's deliberate then why?  What are the
implications?  Do the device tables now go into rw memory?
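
For concreteness, with a made-up pci table the new macro expands to
roughly this:

	/* hypothetical example, not from the patch */
	static const struct pci_device_id my_pci_tbl[] = { { 0, } };
	MODULE_DEVICE_TABLE(pci, my_pci_tbl);

	/* ...which after this change becomes... */
	extern typeof(my_pci_tbl) __mod_pci__my_pci_tbl_device_table
		__attribute__ ((unused, alias("my_pci_tbl")));

Presumably typeof() picks up the table's array type and its const
qualifier, but the changelog should spell that out.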

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 17/17] kasan: enable instrumentation of global variables
  2015-01-29 15:12     ` Andrey Ryabinin
  (?)
@ 2015-01-29 23:13       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-29 23:13 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Rusty Russell,
	Michal Marek, open list:KERNEL BUILD + fi...

On Thu, 29 Jan 2015 18:12:01 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> This feature lets us detect accesses out of bounds
> of global variables.

global variables *within modules*, I think?  More specificity needed here.

> The idea of this is simple. The compiler increases the size of each global
> variable by the redzone size and adds constructors invoking the
> __asan_register_globals() function. Information about each global variable
> (address, size, size with redzone ...) is passed to __asan_register_globals()
> so we can poison the variable's redzone.
> 
> This patch also forces module_alloc() to return an 8*PAGE_SIZE aligned
> address, making shadow memory handling (kasan_module_alloc()/kasan_module_free())
> simpler. Such alignment guarantees that each shadow page backing the
> modules' address space corresponds to only one module_alloc() allocation.
> 
> ...
>
> +int kasan_module_alloc(void *addr, size_t size)
> +{
> +
> +	size_t shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
> +				PAGE_SIZE);
> +	unsigned long shadow_start = kasan_mem_to_shadow((unsigned long)addr);
> +	void *ret;

Like this:

	size_t shadow_size;
	unsigned long shadow_start;
	void *ret;

	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);
	shadow_start = kasan_mem_to_shadow((unsigned long)addr);

it's much easier to read and avoids the 80-column trickery.

I do suspect that

	void *kasan_mem_to_shadow(const void *addr);

would clean up lots and lots of code.
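
i.e. something like this (sketch only, assuming KASAN_SHADOW_OFFSET can
still be added/subtracted as a plain offset):

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}

	static inline void *kasan_shadow_to_mem(const void *shadow_addr)
	{
		return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
			<< KASAN_SHADOW_SCALE_SHIFT);
	}

Most of the (unsigned long) casts at the call sites would then go away.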

> +	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
> +		return -EINVAL;
> +
> +	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
> +			shadow_start + shadow_size,
> +			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
> +			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
> +			__builtin_return_address(0));
> +	return ret ? 0 : -ENOMEM;
> +}
> +
> 
> ...
>
> +struct kasan_global {
> +	const void *beg;		/* Address of the beginning of the global variable. */
> +	size_t size;			/* Size of the global variable. */
> +	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
> +	const void *name;
> +	const void *module_name;	/* Name of the module where the global variable is declared. */
> +	unsigned long has_dynamic_init;	/* This needed for C++ */

This can be removed?

> +#if KASAN_ABI_VERSION >= 4
> +	struct kasan_source_location *location;
> +#endif
> +};
> 
> ...
>


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
  2015-01-29 23:12       ` Andrew Morton
@ 2015-01-30 16:04         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 16:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:45 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
>> a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>>
>> KASAN uses compile-time instrumentation for checking every memory access,
>> therefore GCC >= v4.9.2 is required.
>>
>> ...
>>
>> Based on work by Andrey Konovalov <adech.fo@gmail.com>
> 
> Can we obtain Andrey's signed-off-by: please?
>  

I'll ask.

...

>> +static __always_inline bool memory_is_poisoned_1(unsigned long addr)
> 
> What's with all the __always_inline in this file?  When I remove them
> all, kasan.o .text falls from 8294 bytes down to 4543 bytes.  That's
> massive, and quite possibly faster.
> 
> If there's some magical functional reason for this then can we please
> get a nice prominent comment into this code apologetically explaining
> it?
> 

The main reason is performance. __always_inline is especially needed for check_memory_region()
and memory_is_poisoned() to optimize away the switch in memory_is_poisoned():

	if (__builtin_constant_p(size)) {
		switch (size) {
		case 1:
			return memory_is_poisoned_1(addr);
		case 2:
			return memory_is_poisoned_2(addr);
		case 4:
			return memory_is_poisoned_4(addr);
		case 8:
			return memory_is_poisoned_8(addr);
		case 16:
			return memory_is_poisoned_16(addr);
		default:
			BUILD_BUG();
		}
	}

Always inlining memory_is_poisoned_x() gives an additional 7%-10% on top of that.

According to my simple testing, __always_inline gives about a 20% improvement versus
the non-inlined version of kasan.c.


...

>> +
>> +void __asan_loadN(unsigned long addr, size_t size)
>> +{
>> +	check_memory_region(addr, size, false);
>> +}
>> +EXPORT_SYMBOL(__asan_loadN);
>> +
>> +__attribute__((alias("__asan_loadN")))
> 
> Maybe we need a __alias.  Like __packed and various other helpers.
> 

Ok.

....

>> +
>> +static __always_inline void kasan_report(unsigned long addr,
>> +					size_t size,
>> +					bool is_write)
> 
> Again, why the inline?  This is presumably not a hotpath and
> kasan_report has sixish call sites.
> 

The reason for __always_inline here is to get the correct _RET_IP_.
I could pass it in from the caller and drop __always_inline here.
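
Roughly like this (untested sketch):

	static void kasan_report(unsigned long addr, size_t size,
				bool is_write, unsigned long ip)
	{
		struct access_info info;

		if (likely(!kasan_enabled()))
			return;

		info.access_addr = addr;
		info.access_size = size;
		info.is_write = is_write;
		info.ip = ip;
		kasan_report_error(&info);
	}

with the callers doing kasan_report(addr, size, write, _RET_IP_), so the
correct return address is still captured without forcing the inline.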

> 
>> +{
>> +	struct access_info info;
>> +
>> +	if (likely(!kasan_enabled()))
>> +		return;
>> +
>> +	info.access_addr = addr;
>> +	info.access_size = size;
>> +	info.is_write = is_write;
>> +	info.ip = _RET_IP_;
>> +	kasan_report_error(&info);
>> +}
>>
...

>> +
>> +static void print_address_description(struct access_info *info)
>> +{
>> +	dump_stack();
>> +}
> 
> dump_stack() uses KERN_INFO but the callers of
> print_address_description() use KERN_ERR.  This means that at some
> settings of `dmesg -n', the kasan output will have large missing
> chunks.
> 
> Please test this and decide how bad it is.  A proper fix will be
> somewhat messy (new_dump_stack(KERN_ERR)).
> 

This new_dump_stack() could be useful in other places.
E.g. object_err()/slab_err() in SLUB also use the pr_err() + dump_stack() combination.




^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 01/17] Add kernel address sanitizer infrastructure.
@ 2015-01-30 16:04         ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 16:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:45 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
>> fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>>
>> KASAN uses compile-time instrumentation for checking every memory access,
>> therefore GCC >= v4.9.2 required.
>>
>> ...
>>
>> Based on work by Andrey Konovalov <adech.fo@gmail.com>
> 
> Can we obtain Andrey's signed-off-by: please?
>  

I'll ask.

...

>> +static __always_inline bool memory_is_poisoned_1(unsigned long addr)
> 
> What's with all the __always_inline in this file?  When I remove them
> all, kasan.o .text falls from 8294 bytes down to 4543 bytes.  That's
> massive, and quite possibly faster.
> 
> If there's some magical functional reason for this then can we please
> get a nice prominent comment into this code apologetically explaining
> it?
> 

The main reason is performance. __always_inline especially needed for check_memory_region()
and memory_is_poisoned() to optimize away switch in memory_is_poisoned():

	if (__builtin_constant_p(size)) {
		switch (size) {
		case 1:
			return memory_is_poisoned_1(addr);
		case 2:
			return memory_is_poisoned_2(addr);
		case 4:
			return memory_is_poisoned_4(addr);
		case 8:
			return memory_is_poisoned_8(addr);
		case 16:
			return memory_is_poisoned_16(addr);
		default:
			BUILD_BUG();
		}
	}

Always inlining memory_is_poisoned_x() gives additionally about 7%-10%.

According to my simple testing __always_inline gives about 20% versus
not inlined version of kasan.c


...

>> +
>> +void __asan_loadN(unsigned long addr, size_t size)
>> +{
>> +	check_memory_region(addr, size, false);
>> +}
>> +EXPORT_SYMBOL(__asan_loadN);
>> +
>> +__attribute__((alias("__asan_loadN")))
> 
> Maybe we need a __alias.  Like __packed and various other helpers.
> 

Ok.

....

>> +
>> +static __always_inline void kasan_report(unsigned long addr,
>> +					size_t size,
>> +					bool is_write)
> 
> Again, why the inline?  This is presumably not a hotpath and
> kasan_report has sixish call sites.
> 

The reason of __always_inline here is to get correct _RET_IP_.
I could pass it from above and drop always inline here.

> 
>> +{
>> +	struct access_info info;
>> +
>> +	if (likely(!kasan_enabled()))
>> +		return;
>> +
>> +	info.access_addr = addr;
>> +	info.access_size = size;
>> +	info.is_write = is_write;
>> +	info.ip = _RET_IP_;
>> +	kasan_report_error(&info);
>> +}
>>
...

>> +
>> +static void print_address_description(struct access_info *info)
>> +{
>> +	dump_stack();
>> +}
> 
> dump_stack() uses KERN_INFO but the callers or
> print_address_description() use KERN_ERR.  This means that at some
> settings of `dmesg -n', the kasan output will have large missing
> chunks.
> 
> Please test this and deide how bad it is.  A proper fix will be
> somewhat messy (new_dump_stack(KERN_ERR)).
> 

This new_dump_stack() could be useful in other places.
E.g. object_err()/slab_err() in SLUB also use the pr_err() + dump_stack() combination.




^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 02/17] x86_64: add KASan support
  2015-01-29 23:12       ` Andrew Morton
@ 2015-01-30 16:15         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 16:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jonathan Corbet,
	Andy Lutomirski, open list:DOCUMENTATION

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:46 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> This patch adds arch specific code for kernel address sanitizer.
>>
>> 16TB of virtual addressed used for shadow memory.
>> It's located in range [ffffec0000000000 - fffffc0000000000]
>> between vmemmap and %esp fixup stacks.
>>
>> At early stage we map whole shadow region with zero page.
>> Latter, after pages mapped to direct mapping address range
>> we unmap zero pages from corresponding shadow (see kasan_map_shadow())
>> and allocate and map a real shadow memory reusing vmemmap_populate()
>> function.
>>
>> Also replace __pa with __pa_nodebug before shadow initialized.
>> __pa with CONFIG_DEBUG_VIRTUAL=y make external function call (__phys_addr)
>> __phys_addr is instrumented, so __asan_load could be called before
>> shadow area initialized.
>>
>> ...
>>
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
>>  
>>  config KASAN
>>  	bool "AddressSanitizer: runtime memory debugger"
>> +	depends on !MEMORY_HOTPLUG
>>  	help
>>  	  Enables address sanitizer - runtime memory debugger,
>>  	  designed to find out-of-bounds accesses and use-after-free bugs.
> 
> That's a significant restriction.  It has obvious runtime implications.
> It also means that `make allmodconfig' and `make allyesconfig' don't
> enable kasan, so compile coverage will be impacted.
> 
> This wasn't changelogged.  What's the reasoning and what has to be done
> to fix it?
> 

Yes, this is a runtime dependency: hot-adding memory won't work.
Since we don't have shadow for hotplugged memory, the kernel will crash on the first access to it.
To fix this we would need to allocate shadow for the new memory.

Perhaps it would be better to have a runtime warning instead of the Kconfig dependency?



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 04/17] mm: slub: introduce virt_to_obj function.
  2015-01-29 23:12       ` Andrew Morton
@ 2015-01-30 16:17         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 16:17 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:48 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> virt_to_obj takes kmem_cache address, address of slab page,
>> address x pointing somewhere inside slab object,
>> and returns address of the begging of object.
> 
> "beginning"
> 
> The above text may as well be placed into slub_def.h as a comment.
> 

Ok.

>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>> Acked-by: Christoph Lameter <cl@linux.com>
>> ---
>>  include/linux/slub_def.h | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
>> index 9abf04e..eca3883 100644
>> --- a/include/linux/slub_def.h
>> +++ b/include/linux/slub_def.h
>> @@ -110,4 +110,9 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
>>  }
>>  #endif
>>  
>> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_page, void *x)
>> +{
>> +	return x - ((x - slab_page) % s->size);
>> +}
> 
> "const void *x" would be better.
> 

Yep.
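
For reference, a hedged sketch of the helper with both suggestions folded in (the comment wording, and making slab_page const as well, are assumptions):

/*
 * virt_to_obj - returns address of the beginning of the object
 * @s: the cache the object belongs to
 * @slab_page: address of the slab page containing the object
 * @x: address pointing somewhere inside the object
 */
static inline void *virt_to_obj(struct kmem_cache *s,
				const void *slab_page, const void *x)
{
	return (void *)x - ((x - slab_page) % s->size);
}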

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-29 23:12       ` Andrew Morton
@ 2015-01-30 17:05         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 17:05 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:50 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> Wrap access to object's metadata in external functions with
>> metadata_access_enable()/metadata_access_disable() function calls.
>>
>> This hooks separates payload accesses from metadata accesses
>> which might be useful for different checkers (e.g. KASan).
>>
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -467,13 +467,23 @@ static int slub_debug;
>>  static char *slub_debug_slabs;
>>  static int disable_higher_order_debug;
>>  
>> +static inline void metadata_access_enable(void)
>> +{
>> +}
>> +
>> +static inline void metadata_access_disable(void)
>> +{
>> +}
> 
> Some code comments here would be useful.  What they do, why they exist,
> etc.  The next patch fills them in with
> kasan_disable_local/kasan_enable_local but that doesn't help the reader
> to understand what's going on.  The fact that
> kasan_disable_local/kasan_enable_local are also undocumented doesn't
> help.
> 

Ok, How about this?

/*
 * This hooks separate payload access from metadata access.
 * Useful for memory checkers that have to know when slub
 * accesses metadata.
 */
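
And, for readers following along, a minimal sketch of how the next patch presumably fills these hooks in with the kasan helpers mentioned above (the exact form is assumed):

static inline void metadata_access_enable(void)
{
	kasan_disable_local();	/* suppress kasan reports while slub touches metadata */
}

static inline void metadata_access_disable(void)
{
	kasan_enable_local();	/* re-arm kasan error reporting */
}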



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 15/17] kernel: add support for .init_array.* constructors
  2015-01-29 23:13       ` Andrew Morton
@ 2015-01-30 17:21         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 17:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Arnd Bergmann, open list:GENERIC INCLUDE/A...

On 01/30/2015 02:13 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:59 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> KASan uses constructors for initializing redzones for global
>> variables. Actually KASan doesn't need priorities for constructors,
>> so they were removed from GCC 5.0, but GCC 4.9.2 still generates
>> constructors with priorities.
> 
> I don't understand this changelog either.  What's wrong with priorities
> and what is the patch doing about it?  More details, please.
> 

Currently the kernel ignores constructors with priorities (e.g. .init_array.00099).
It understands only constructors with the default priority (.init_array).

This patch adds support for constructors with priorities.

For the kernel image we put pointers to the constructors between __ctors_start/__ctors_end
and do_ctors() will call them (see the sketch below).

For modules, the .init_array.* sections are merged into the .init_array section.
The module loading code already handles constructors in the .init_array section properly.
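
A rough sketch of the consumer side, i.e. how do_ctors() walks the pointers collected between __ctors_start and __ctors_end (simplified; treat the exact declarations as assumptions):

typedef void (*ctor_fn_t)(void);

extern ctor_fn_t __ctors_start[], __ctors_end[];

static void __init do_ctors(void)
{
	ctor_fn_t *fn;

	/* includes the .init_array.NNNNN entries the linker script now keeps */
	for (fn = __ctors_start; fn < __ctors_end; fn++)
		(*fn)();
}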



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 16/17] module: fix types of device tables aliases
  2015-01-29 23:13       ` Andrew Morton
@ 2015-01-30 17:44         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 17:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Rusty Russell

On 01/30/2015 02:13 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:12:00 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> MODULE_DEVICE_TABLE() macro used to create aliases to device tables.
>> Normally alias should have the same type as aliased symbol.
>>
>> Device tables are arrays, so they have 'struct type##_device_id[x]'
>> types. Alias created by MODULE_DEVICE_TABLE() will have non-array type -
>> 	'struct type##_device_id'.
>>
>> This inconsistency confuses compiler, it could make a wrong
>> assumption about variable's size which leads KASan to
>> produce a false positive report about out of bounds access.
> 
> The changelog describes the problem but doesn't describe how the patch
> addresses the problem.  Some more details would be useful.
> 

For every global variable the compiler emits a call to __asan_register_globals(),
passing information about the variable (address, size, size with redzone, name, ...).
__asan_register_globals() poisons the symbol's redzone so we can detect out-of-bounds accesses.

If we have an alias to a symbol, __asan_register_globals() will be called for the alias as well as for the symbol.
The compiler determines the size of a variable from its type.
The alias and the symbol have the same address, but if the alias has the wrong size we will
poison part of the memory that actually belongs to the symbol rather than the redzone.


>> --- a/include/linux/module.h
>> +++ b/include/linux/module.h
>> @@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
>>  #ifdef MODULE
>>  /* Creates an alias so file2alias.c can find device table. */
>>  #define MODULE_DEVICE_TABLE(type, name)					\
>> -  extern const struct type##_device_id __mod_##type##__##name##_device_table \
>> +extern typeof(name) __mod_##type##__##name##_device_table \
>>    __attribute__ ((unused, alias(__stringify(name))))
> 
> We lost the const?  If that's deliberate then why?  What are the
> implications?  Do the device tables now go into rw memory?
> 

The lack of const is unintentional, but it should be harmless because
this is just an alias to the device table.

I'll add const back.
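
Something along these lines, keeping the typeof-based array type while restoring the qualifier (a sketch, not the final patch):

/* Creates an alias so file2alias.c can find device table. */
#define MODULE_DEVICE_TABLE(type, name)					\
extern const typeof(name) __mod_##type##__##name##_device_table	\
  __attribute__ ((unused, alias(__stringify(name))))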

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 17/17] kasan: enable instrumentation of global variables
  2015-01-29 23:13       ` Andrew Morton
@ 2015-01-30 17:47         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 17:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Rusty Russell,
	Michal Marek, open list:KERNEL BUILD + fi...

On 01/30/2015 02:13 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:12:01 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> This feature let us to detect accesses out of bounds
>> of global variables.
> 
> global variables *within modules*, I think?  More specificity needed here.

Within modules and within the kernel image. Handling modules is just the trickiest
part of this.

> 
>> The idea of this is simple. Compiler increases each global variable
>> by redzone size and add constructors invoking __asan_register_globals()
>> function. Information about global variable (address, size,
>> size with redzone ...) passed to __asan_register_globals() so we could
>> poison variable's redzone.
>>
>> This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
>> address making shadow memory handling ( kasan_module_alloc()/kasan_module_free() )
>> more simple. Such alignment guarantees that each shadow page backing
>> modules address space correspond to only one module_alloc() allocation.
>>
>> ...
>>
>> +int kasan_module_alloc(void *addr, size_t size)
>> +{
>> +
>> +	size_t shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
>> +				PAGE_SIZE);
>> +	unsigned long shadow_start = kasan_mem_to_shadow((unsigned long)addr);
>> +	void *ret;
> 
> Like this:
> 
> 	size_t shadow_size;
> 	unsigned long shadow_start;
> 	void *ret;
> 
> 	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);
> 	shadow_start = kasan_mem_to_shadow((unsigned long)addr);
> 
> it's much easier to read and avoids the 80-column trickery.
> 
> I do suspect that
> 
> 	void *kasan_mem_to_shadow(const void *addr);
> 
> would clean up lots and lots of code.
> 

Agreed.

>> +	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
>> +		return -EINVAL;
>> +
>> +	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
>> +			shadow_start + shadow_size,
>> +			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
>> +			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
>> +			__builtin_return_address(0));
>> +	return ret ? 0 : -ENOMEM;
>> +}
>> +
>>
>> ...
>>
>> +struct kasan_global {
>> +	const void *beg;		/* Address of the beginning of the global variable. */
>> +	size_t size;			/* Size of the global variable. */
>> +	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
>> +	const void *name;
>> +	const void *module_name;	/* Name of the module where the global variable is declared. */
>> +	unsigned long has_dynamic_init;	/* This needed for C++ */
> 
> This can be removed?
> 

No, the compiler dictates the layout of this struct. That probably deserves a comment.

>> +#if KASAN_ABI_VERSION >= 4
>> +	struct kasan_source_location *location;
>> +#endif
>> +};
>>
>> ...
>>
> 
> 
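
Since gcc, not a header, defines the record layout above, a commented sketch of how the runtime side might walk it may help; the helper names are taken from the rest of the series and the rounding detail is an assumption:

void __asan_register_globals(struct kasan_global *globals, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++) {
		struct kasan_global *g = &globals[i];
		size_t aligned = round_up(g->size, KASAN_SHADOW_SCALE_SIZE);

		/* the variable itself stays accessible ... */
		kasan_unpoison_shadow(g->beg, g->size);
		/* ... and the compiler-added tail redzone is poisoned */
		kasan_poison_shadow((const char *)g->beg + aligned,
				    g->size_with_redzone - aligned,
				    KASAN_GLOBAL_REDZONE);
	}
}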


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 13/17] mm: vmalloc: add flag preventing guard hole allocation
  2015-01-29 23:12       ` Andrew Morton
@ 2015-01-30 17:51         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 17:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm

On 01/30/2015 02:12 AM, Andrew Morton wrote:
> On Thu, 29 Jan 2015 18:11:57 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> 
>> For instrumenting global variables KASan will shadow memory
>> backing memory for modules. So on module loading we will need
>> to allocate shadow memory and map it at exact virtual address.
> 
> I don't understand.  What does "map it at exact virtual address" mean?
> 

I mean that if module_alloc() returned address x, then the
shadow memory should be mapped exactly at address kasan_mem_to_shadow(x).

>> __vmalloc_node_range() seems like the best fit for that purpose,
>> except it puts a guard hole after allocated area.
> 
> Why is the guard hole a problem?
> 

Because of the guard hole in the shadow mapping, some future allocations of shadow memory
will fail: the requested address (kasan_mem_to_shadow(x)) will already be occupied
by the guard hole of a previous allocation.
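
A worked example with made-up numbers may make this clearer (assuming KASAN_SHADOW_SCALE_SHIFT == 3 and the 8 * PAGE_SIZE module_alloc() alignment from later in the series):

/* Purely illustrative addresses. */
static void __maybe_unused guard_hole_example(void)
{
	unsigned long mod_a = 0xffffffffa0000000UL;
	unsigned long mod_b = mod_a + 8 * PAGE_SIZE;	/* the very next module area */

	unsigned long shadow_a = kasan_mem_to_shadow(mod_a);
	unsigned long shadow_b = kasan_mem_to_shadow(mod_b);

	/*
	 * shadow_b == shadow_a + PAGE_SIZE, i.e. the shadow regions are
	 * directly adjacent.  A guard page appended to the mapping at
	 * shadow_a would occupy exactly the fixed address where shadow_b
	 * must later be mapped, so that second allocation would fail.
	 */
	WARN_ON(shadow_b != shadow_a + PAGE_SIZE);
}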



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 02/17] x86_64: add KASan support
  2015-01-30 16:15         ` Andrey Ryabinin
@ 2015-01-30 21:35           ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-30 21:35 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jonathan Corbet,
	Andy Lutomirski, open list:DOCUMENTATION

On Fri, 30 Jan 2015 19:15:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> >> --- a/lib/Kconfig.kasan
> >> +++ b/lib/Kconfig.kasan
> >> @@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
> >>  
> >>  config KASAN
> >>  	bool "AddressSanitizer: runtime memory debugger"
> >> +	depends on !MEMORY_HOTPLUG
> >>  	help
> >>  	  Enables address sanitizer - runtime memory debugger,
> >>  	  designed to find out-of-bounds accesses and use-after-free bugs.
> > 
> > That's a significant restriction.  It has obvious runtime implications.
> > It also means that `make allmodconfig' and `make allyesconfig' don't
> > enable kasan, so compile coverage will be impacted.
> > 
> > This wasn't changelogged.  What's the reasoning and what has to be done
> > to fix it?
> > 
> 
> Yes, this is a runtime dependency: hot-adding memory won't work.
> Since we don't have shadow for hotplugged memory, the kernel will crash on the first access to it.
> To fix this we would need to allocate shadow for the new memory.

This definitely should be covered in the changelog.

In general, please take most (all?) review questions as requests to add
content to the changelog and/or to add code comments - if a reviewer
didn't understand something then other readers are likely to be
wondering the same thing.

> Perhaps it would be better to have a runtime warning instead of the Kconfig dependency?

mmm...  yes, that sounds better.  Maybe print a warning at startup and
then disable memory hot-add?  I expect that if the user has enabled
kasan and mem-hotplug at the same time, he/she would prefer that
hotplug be disabled rather than kasan.
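
A hedged sketch of that approach (the memory-hotplug notifier API is real; wiring it up in kasan this way is an assumption about a future version of the series):

static int __meminit kasan_mem_notifier(struct notifier_block *nb,
					unsigned long action, void *data)
{
	/* no shadow exists for hot-added memory, so refuse the hot-add */
	return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK;
}

static int __init kasan_memhotplug_init(void)
{
	pr_err("WARNING: KASan doesn't support memory hot-add\n");
	pr_err("Memory hot-add will be disabled\n");

	hotplug_memory_notifier(kasan_mem_notifier, 0);

	return 0;
}

core_initcall(kasan_memhotplug_init);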


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 02/17] x86_64: add KASan support
  2015-01-30 16:15         ` Andrey Ryabinin
@ 2015-01-30 21:37           ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-30 21:37 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jonathan Corbet,
	Andy Lutomirski, open list:DOCUMENTATION

On Fri, 30 Jan 2015 19:15:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> >> --- a/lib/Kconfig.kasan
> >> +++ b/lib/Kconfig.kasan
> >> @@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
> >>  
> >>  config KASAN
> >>  	bool "AddressSanitizer: runtime memory debugger"
> >> +	depends on !MEMORY_HOTPLUG
> >>  	help
> >>  	  Enables address sanitizer - runtime memory debugger,
> >>  	  designed to find out-of-bounds accesses and use-after-free bugs.
> > 
> > That's a significant restriction.  It has obvious runtime implications.
> > It also means that `make allmodconfig' and `make allyesconfig' don't
> > enable kasan, so compile coverage will be impacted.
> > 
> > This wasn't changelogged.  What's the reasoning and what has to be done
> > to fix it?
> > 
> 
> Yes, this is a runtime dependency: hot-adding memory won't work.
> Since we don't have shadow for hotplugged memory, the kernel will crash on the first access to it.
> To fix this we would need to allocate shadow for the new memory.
> 
> Perhaps it would be better to have a runtime warning instead of the Kconfig dependency?

Is there a plan to get mem-hotplug working with kasan, btw?  It doesn't
strike me as very important/urgent.  Please add a sentence about this
to the changelog as well.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-30 17:05         ` Andrey Ryabinin
@ 2015-01-30 21:42           ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-30 21:42 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Fri, 30 Jan 2015 20:05:13 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -467,13 +467,23 @@ static int slub_debug;
> >>  static char *slub_debug_slabs;
> >>  static int disable_higher_order_debug;
> >>  
> >> +static inline void metadata_access_enable(void)
> >> +{
> >> +}
> >> +
> >> +static inline void metadata_access_disable(void)
> >> +{
> >> +}
> > 
> > Some code comments here would be useful.  What they do, why they exist,
> > etc.  The next patch fills them in with
> > kasan_disable_local/kasan_enable_local but that doesn't help the reader
> > to understand what's going on.  The fact that
> > kasan_disable_local/kasan_enable_local are also undocumented doesn't
> > help.
> > 
> 
> Ok, How about this?
> 
> /*
>  * This hooks separate payload access from metadata access.
>  * Useful for memory checkers that have to know when slub
>  * accesses metadata.
>  */

"These hooks".

I still don't understand :( Maybe I'm having a more-stupid-than-usual
day.  How can a function "separate access"?  What does this mean?  More
details, please.  I think I've only once seen a comment which had too
much info!



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 17/17] kasan: enable instrumentation of global variables
  2015-01-30 17:47         ` Andrey Ryabinin
@ 2015-01-30 21:45           ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-30 21:45 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Rusty Russell,
	Michal Marek, open list:KERNEL BUILD + fi...

On Fri, 30 Jan 2015 20:47:13 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> >> +struct kasan_global {
> >> +	const void *beg;		/* Address of the beginning of the global variable. */
> >> +	size_t size;			/* Size of the global variable. */
> >> +	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
> >> +	const void *name;
> >> +	const void *module_name;	/* Name of the module where the global variable is declared. */
> >> +	unsigned long has_dynamic_init;	/* This needed for C++ */
> > 
> > This can be removed?
> > 
> 
> No, the compiler dictates the layout of this struct. That probably deserves a comment.

I see.  A link to the relevant gcc doc would be good.

Perhaps the compiler provides a header file so clients of this feature
don't need to write their own?

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-30 21:42           ` Andrew Morton
@ 2015-01-30 23:11             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 23:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

2015-01-31 0:42 GMT+03:00 Andrew Morton <akpm@linux-foundation.org>:
> On Fri, 30 Jan 2015 20:05:13 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>
>> >> --- a/mm/slub.c
>> >> +++ b/mm/slub.c
>> >> @@ -467,13 +467,23 @@ static int slub_debug;
>> >>  static char *slub_debug_slabs;
>> >>  static int disable_higher_order_debug;
>> >>
>> >> +static inline void metadata_access_enable(void)
>> >> +{
>> >> +}
>> >> +
>> >> +static inline void metadata_access_disable(void)
>> >> +{
>> >> +}
>> >
>> > Some code comments here would be useful.  What they do, why they exist,
>> > etc.  The next patch fills them in with
>> > kasan_disable_local/kasan_enable_local but that doesn't help the reader
>> > to understand what's going on.  The fact that
>> > kasan_disable_local/kasan_enable_local are also undocumented doesn't
>> > help.
>> >
>>
>> Ok, How about this?
>>
>> /*
>>  * This hooks separate payload access from metadata access.
>>  * Useful for memory checkers that have to know when slub
>>  * accesses metadata.
>>  */
>
> "These hooks".
>
> I still don't understand :( Maybe I'm having a more-stupid-than-usual
> day.

I think it's me being stupid today ;) I'll try to explain better.

> How can a function "separate access"?  What does this mean?  More
> details, please.  I think I've only once seen a comment which had too
> much info!
>

slub may access memory that kasan has marked as inaccessible (the object's metadata).
Kasan shouldn't print a report in that case because the access is valid.
Disabling instrumentation of the slub.c code is not enough to achieve this,
because slub passes a pointer to the object's metadata into memchr_inv().

We can't disable instrumentation for memchr_inv() because it is quite a
generic function.

So metadata_access_enable/metadata_access_disable wrap the
places in slub.c where access to the object's metadata starts/ends,
and kasan_disable_local/kasan_enable_local just disable/enable
error reporting in those places.
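
In other words, roughly the pattern below (a simplified sketch of the patched slub.c usage; the slub error reporting details are elided):

static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
				  u8 *object, char *what,
				  u8 *start, unsigned int value, unsigned int bytes)
{
	u8 *fault;

	metadata_access_enable();		/* kasan: the next access is legitimate */
	fault = memchr_inv(start, value, bytes);
	metadata_access_disable();		/* kasan: back to normal reporting */
	if (!fault)
		return 1;

	/* ... restore the bytes and print the usual slub error report ... */
	return 0;
}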

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-30 23:11             ` Andrey Ryabinin
@ 2015-01-30 23:16               ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-01-30 23:16 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

On Sat, 31 Jan 2015 03:11:55 +0400 Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:

> >> > kasan_disable_local/kasan_enable_local are also undocumented doesn't
> >> > help.
> >> >
> >>
> >> Ok, How about this?
> >>
> >> /*
> >>  * This hooks separate payload access from metadata access.
> >>  * Useful for memory checkers that have to know when slub
> >>  * accesses metadata.
> >>  */
> >
> > "These hooks".
> >
> > I still don't understand :( Maybe I'm having a more-stupid-than-usual
> > day.
> 
> I think it's me being stupid today ;) I'll try to explain better.
> 
> > How can a function "separate access"?  What does this mean?  More
> > details, please.  I think I've only once seen a comment which had too
> > much info!
> >
> 
> slub may access memory that kasan has marked as inaccessible (the object's metadata).
> Kasan shouldn't print a report in that case because the access is valid.
> Disabling instrumentation of the slub.c code is not enough to achieve this,
> because slub passes a pointer to the object's metadata into memchr_inv().
> 
> We can't disable instrumentation for memchr_inv() because it is quite a
> generic function.
> 
> So metadata_access_enable/metadata_access_disable wrap the
> places in slub.c where access to the object's metadata starts/ends,
> and kasan_disable_local/kasan_enable_local just disable/enable
> error reporting in those places.

ooh, I see.  Something like this?

/*
 * slub is about to manipulate internal object metadata.  This memory lies
 * outside the range of the allocated object, so accessing it would normally
 * be reported by kasan as a bounds error.  metadata_access_enable() is used
 * to tell kasan that these accesses are OK.
 */

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 17/17] kasan: enable instrumentation of global variables
  2015-01-30 21:45           ` Andrew Morton
@ 2015-01-30 23:18             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 23:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Rusty Russell,
	Michal Marek, open list:KERNEL BUILD + fi...

2015-01-31 0:45 GMT+03:00 Andrew Morton <akpm@linux-foundation.org>:
> On Fri, 30 Jan 2015 20:47:13 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>
>> >> +struct kasan_global {
>> >> +  const void *beg;                /* Address of the beginning of the global variable. */
>> >> +  size_t size;                    /* Size of the global variable. */
>> >> +  size_t size_with_redzone;       /* Size of the variable + size of the red zone. 32 bytes aligned */
>> >> +  const void *name;
>> >> +  const void *module_name;        /* Name of the module where the global variable is declared. */
>> >> +  unsigned long has_dynamic_init; /* This needed for C++ */
>> >
>> > This can be removed?
>> >
>>
>> No, compiler dictates layout of this struct. That probably deserves a comment.
>
> I see.  A link to the relevant gcc doc would be good.
>

There is no doc, only gcc source code.

> Perhaps the compiler provides a header file so clients of this feature
> don't need to write their own?
>

Nope.
Actually, we are the only client of this feature outside gcc code.
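
For context, a rough sketch of the registration hook that consumes this compiler-defined layout.
This is a simplified illustration based on the description above (helper names such as
kasan_unpoison_shadow()/kasan_poison_shadow() and KASAN_GLOBAL_REDZONE are taken from elsewhere
in the series), not the exact code of the patch:

	/* the compiler emits constructors that pass an array of kasan_global */
	void __asan_register_globals(struct kasan_global *globals, size_t size)
	{
		size_t i;

		for (i = 0; i < size; i++) {
			struct kasan_global *g = &globals[i];
			size_t aligned = round_up(g->size, KASAN_SHADOW_SCALE_SIZE);

			/* the variable itself is accessible ... */
			kasan_unpoison_shadow(g->beg, g->size);
			/* ... the trailing redzone is not */
			kasan_poison_shadow(g->beg + aligned,
					    g->size_with_redzone - aligned,
					    KASAN_GLOBAL_REDZONE);
		}
	}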

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-01-30 23:16               ` Andrew Morton
@ 2015-01-30 23:19                 ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 23:19 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Pekka Enberg, David Rientjes

2015-01-31 2:16 GMT+03:00 Andrew Morton <akpm@linux-foundation.org>:
> On Sat, 31 Jan 2015 03:11:55 +0400 Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
>> >> > kasan_disable_local/kasan_enable_local are also undocumented doesn't
>> >> > help.
>> >> >
>> >>
>> >> Ok, How about this?
>> >>
>> >> /*
>> >>  * This hooks separate payload access from metadata access.
>> >>  * Useful for memory checkers that have to know when slub
>> >>  * accesses metadata.
>> >>  */
>> >
>> > "These hooks".
>> >
>> > I still don't understand :( Maybe I'm having a more-stupid-than-usual
>> > day.
>>
>> I think it's me being stupid today ;) I'll try to explain better.
>>
>> > How can a function "separate access"?  What does this mean?  More
>> > details, please.  I think I've only once seen a comment which had too
>> > much info!
>> >
>>
>> slub could access memory marked by kasan as inaccessible (object's metadata).
>> Kasan shouldn't print report in that case because this access is valid.
>> Disabling instrumentation of slub.c code is not enough to achieve this
>> because slub passes pointer to object's metadata into memchr_inv().
>>
>> We can't disable instrumentation for memchr_inv() because this is quite
>> generic function.
>>
>> So metadata_access_enable/metadata_access_disable wrap some
>> places in slub.c where access to object's metadata starts/end.
>> And kasan_disable_local/kasan_enable_local just disable/enable
>> error reporting in this places.
>
> ooh, I see.  Something like this?
>

Yes! Thank you, this looks much better.

> /*
>  * slub is about to manipulate internal object metadata.  This memory lies
>  * outside the range of the allocated object, so accessing it would normally
>  * be reported by kasan as a bounds error.  metadata_access_enable() is used
>  * to tell kasan that these accesses are OK.
>  */

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v10 02/17] x86_64: add KASan support
  2015-01-30 21:37           ` Andrew Morton
@ 2015-01-30 23:27             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-01-30 23:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, LKML, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jonathan Corbet,
	Andy Lutomirski, open list:DOCUMENTATION

2015-01-31 0:37 GMT+03:00 Andrew Morton <akpm@linux-foundation.org>:
> On Fri, 30 Jan 2015 19:15:42 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>
>> >> --- a/lib/Kconfig.kasan
>> >> +++ b/lib/Kconfig.kasan
>> >> @@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
>> >>
>> >>  config KASAN
>> >>    bool "AddressSanitizer: runtime memory debugger"
>> >> +  depends on !MEMORY_HOTPLUG
>> >>    help
>> >>      Enables address sanitizer - runtime memory debugger,
>> >>      designed to find out-of-bounds accesses and use-after-free bugs.
>> >
>> > That's a significant restriction.  It has obvious runtime implications.
>> > It also means that `make allmodconfig' and `make allyesconfig' don't
>> > enable kasan, so compile coverage will be impacted.
>> >
>> > This wasn't changelogged.  What's the reasoning and what has to be done
>> > to fix it?
>> >
>>
>> Yes, this is runtime dependency. Hot adding memory won't work.
>> Since we don't have shadow for hotplugged memory, kernel will crash on the first access to it.
>> To fix this we need to allocate shadow for new memory.
>>
>> Perhaps it would be better to have a runtime warning instead of Kconfig dependecy?
>
> Is there a plan to get mem-hotplug working with kasan, btw?  It doesn't
> strike me as very important/urgent.  Please add a sentence about this
> to the changelog as well.
>

I don't have a strict plan for this. I could work on this, but not now.
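
(For what it's worth, the runtime-warning idea mentioned above could look roughly like the sketch
below, using the generic memory hotplug notifier; this only illustrates the approach and is not
code from this series.)

	static int kasan_mem_notifier(struct notifier_block *nb,
				      unsigned long action, void *data)
	{
		/* reject hot-added memory until shadow can be allocated for it */
		return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK;
	}

	static int __init kasan_memhotplug_init(void)
	{
		pr_err("WARNING: KASan doesn't support memory hot-add\n");
		pr_err("Memory hot-add will be disabled\n");

		hotplug_memory_notifier(kasan_mem_notifier, 0);

		return 0;
	}

	core_initcall(kasan_memhotplug_init);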

^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v11 00/19] Kernel address sanitizer - runtime memory debugger.
  2014-07-09 11:29 ` Andrey Ryabinin
@ 2015-02-03 17:42   ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Michal Marek,
	Thomas Gleixner, Ingo Molnar, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Dave Hansen,
	Andi Kleen, Vegard Nossum, H. Peter Anvin, x86, linux-mm,
	Randy Dunlap, Peter Zijlstra, Alexander Viro, Dave Jones,
	Jonathan Corbet, Linus Torvalds, Catalin Marinas

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the kernel
to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore you
will need a fresh GCC >= v4.9.2.

Patches also available in git:

	git://github.com/aryabinin/linux --branch=kasan/kasan_v11

Changes since v10:
	- Address comments from Andrew.
	   Note: I didn't fix log level inconsistency between pr_err()/dump_stack()
	   yet. This doesn't seem super important right now, and I don't want to bloat
	   this patchset even more. I think it would be better to do this in a separate series,
	   since if we want to fix this, we will need to fix the slub code too (object_err()
	   which is used by KASan).


Historical background of address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others):
	https://code.google.com/p/address-sanitizer/wiki/FoundBugs
	https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
	https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed here:
	https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some. It's somewhat expected
	that when we boot the kernel and run a trivial workload, we do not
	find hundreds of bugs -- most of the harmful bugs in kernel codebase
	were already fixed the hard way (the kernel is quite stable, right).
	Based on our experience with user-space version of the tool, most of
	the bugs will be discovered by continuously testing new code (new bugs
	discovered the easy way), running fuzzers (that can discover existing
	bugs that are not hit frequently enough) and running end-to-end tests
	of production systems.

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port.

	Thanks"


Comparison with other debugging features:
=======================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be 500-600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

 no debug:	87380  16384  16384    30.00    41624.72

 kasan inline:	87380  16384  16384    30.00    12870.54

 kasan outline:	87380  16384  16384    30.00    10586.39

 kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck cannot work with more than one CPU; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads;
	  KASan is able to detect both bad reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

Basic idea:
===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and to use the compiler's instrumentation to check the shadow memory
    on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for shadow to cover all 128TB)
    and uses a direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
    where KASAN_SHADOW_SCALE_SHIFT = 3.
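
    As a worked example (assuming the x86_64 shadow offset used by this layout,
    KASAN_SHADOW_OFFSET == 0xdffffc0000000000; treat the exact constant as illustrative):

         kasan_mem_to_shadow(0xffff880000000000)
                 == (0xffff880000000000 >> 3) + 0xdffffc0000000000
                 == 0x1ffff10000000000 + 0xdffffc0000000000
                 == 0xffffed0000000000   /* inside the 16TB shadow region */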

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is reported.
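
    A simplified sketch of what such a check boils down to for a 1-byte access
    (illustrative only; the real __asan_load*/__asan_store* code in mm/kasan/ also
    handles unaligned and multi-byte accesses, and the report helper's exact
    signature may differ):

         static bool memory_is_poisoned_1(unsigned long addr)
         {
                 s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

                 if (unlikely(shadow_value)) {
                         /* offset of the accessed byte within its 8-byte granule */
                         s8 last_byte = addr & KASAN_SHADOW_MASK;	/* mask == SCALE_SIZE - 1 */

                         return unlikely(last_byte >= shadow_value);
                 }

                 return false;
         }

         void __asan_load1(unsigned long addr)
         {
                 if (memory_is_poisoned_1(addr))
                         kasan_report(addr, 1, /* is_write */ false, _RET_IP_);
         }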


Changelog for previous versions:
===============================

Changes since v9:
	- Makefile changes per discussion with Michal Marek
	- Fixed false-positive reports that could happen on module freeing.

Changes since v8:
	- Fixed unpoisoned redzones for not-allocated-yet object
	    in newly allocated slab page. (from Dmitry C.)

	- Some minor non-functional cleanups in kasan internals.

	- Added ack from Catalin

	- Added stack instrumentation. With this we could detect
	    out of bounds accesses in stack variables. (patch 12)

	- Added globals instrumentation - catching out of bounds in
	    global variables. (patches 13-17)

	- Shadow moved out from vmalloc into hole between vmemmap
	    and %esp fixup stacks. For globals instrumentation
	    we will need shadow backing module addresses.
	    So we need some sort of a shadow memory allocator
	    (something like the vmemmap_populate() function, except
	    that it should be available after boot).

	    __vmalloc_node_range() suits that purpose, except that
	    it can't be used for allocating for shadow in vmalloc
	    area because shadow in vmalloc is already 'allocated'
	    to protect us from other vmalloc users. So we need
	    16TB of unused addresses. And we have big enough hole
	    between vmemmap and %esp fixup stacks. So I moved shadow
	    there.


Changes since v7:
        - Fix build with CONFIG_KASAN_INLINE=y from Sasha.

        - Don't poison the redzone on freeing, since it is already poisoned (from Dmitry Chernenkov).

        - Fix altinstruction_entry for memcpy.

        - Move kasan_slab_free() call after debug_obj_free to prevent some false-positives
            with CONFIG_DEBUG_OBJECTS=y

        - Drop -pg flag for kasan internals to avoid recursion with function tracer
           enabled.

        - Added ack from Christoph.


Changes since v6:
   - New patch 'x86_64: kasan: add interceptors for memset/memmove/memcpy functions'
        Recently instrumentation of builtin functions calls (memset/memmove/memcpy)
        was removed in GCC 5.0. So to check the memory accessed by such functions,
        we now need interceptors for them.

   - Added kasan's die notifier which prints a hint message before General protection fault,
       explaining that GPF could be caused by NULL-ptr dereference or user memory access.

   - Minor refactoring in 3/n patch. Rename kasan_map_shadow() to kasan_init() and call it
     from setup_arch() instead of zone_sizes_init().

   - Slightly tweak kasan's report layout.

   - Update changelog for 1/n patch.

Changes since v5:
    - Added  __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Changed in Documentation/kasan.txt per Jonathan.

    - Patch for inline instrumentation support merged to the first patch.
        GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
         Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for mm/kasan/kasan.c file, because -fconserve-stack is bogus
      and it causes an unnecessary split in __asan_load1/__asan_store1. Because of this split
      kasan_report() is actually not inlined (even though it is __always_inline) and _RET_IP_ gives
      an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - merge patch 'efi: libstub: disable KASAN for efistub in' into the first patch.
        No reason to keep it separate.

    - Added support for upcoming asan ABI changes in GCC 5.0 (second patch).
        GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
        in order to avoid breaking kasan with a future GCC update.
        Details about gcc ABI changes in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the doc (GCC kasan patches were backported into the 4.9 branch)

    - Dropped last patch with inline instrumentation support. At first let's wait for merging GCC patches.

Changes since v3:

    - rebased on last mm
    - Added comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:
       https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced CALL_KASAN_REPORT define with inline function

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
             https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

Changes since v1:

    - The main change is in the shadow memory layout.
      Now for shadow memory we reserve 1/8 of all virtual addresses available to the kernel:
      16TB on x86_64, to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped into the direct-mapping address range,
      we unmap the zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since per-arch work is much bigger now, support for arm/x86_32 platforms was dropped.

     - CFLAGS was changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed. The compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places

Andrey Ryabinin (19):
  compiler: introduce __alias(symbol) shortcut
  Add kernel address sanitizer infrastructure.
  kasan: disable memory hotplug
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share object_err function
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  kasan: enable stack instrumentation
  mm: vmalloc: add flag preventing guard hole allocation
  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  kernel: add support for .init_array.* constructors
  module: fix types of device tables aliases
  kasan: enable instrumentation of global variables

 Documentation/kasan.txt                | 170 +++++++++++
 Documentation/x86/x86_64/mm.txt        |   2 +
 Makefile                               |   3 +-
 arch/arm/kernel/module.c               |   2 +-
 arch/arm64/kernel/module.c             |   4 +-
 arch/mips/kernel/module.c              |   2 +-
 arch/parisc/kernel/module.c            |   2 +-
 arch/s390/kernel/module.c              |   2 +-
 arch/sparc/kernel/module.c             |   2 +-
 arch/unicore32/kernel/module.c         |   2 +-
 arch/x86/Kconfig                       |   1 +
 arch/x86/boot/Makefile                 |   2 +
 arch/x86/boot/compressed/Makefile      |   2 +
 arch/x86/boot/compressed/eboot.c       |   3 +-
 arch/x86/boot/compressed/misc.h        |   1 +
 arch/x86/include/asm/kasan.h           |  31 ++
 arch/x86/include/asm/page_64_types.h   |  12 +-
 arch/x86/include/asm/string_64.h       |  18 +-
 arch/x86/kernel/Makefile               |   4 +
 arch/x86/kernel/dumpstack.c            |   5 +-
 arch/x86/kernel/head64.c               |   9 +-
 arch/x86/kernel/head_64.S              |  30 ++
 arch/x86/kernel/module.c               |  14 +-
 arch/x86/kernel/setup.c                |   3 +
 arch/x86/kernel/x8664_ksyms_64.c       |  10 +-
 arch/x86/lib/memcpy_64.S               |   6 +-
 arch/x86/lib/memmove_64.S              |   4 +
 arch/x86/lib/memset_64.S               |  10 +-
 arch/x86/mm/Makefile                   |   3 +
 arch/x86/mm/kasan_init_64.c            | 206 +++++++++++++
 arch/x86/realmode/Makefile             |   2 +-
 arch/x86/realmode/rm/Makefile          |   1 +
 arch/x86/vdso/Makefile                 |   1 +
 drivers/firmware/efi/libstub/Makefile  |   1 +
 drivers/firmware/efi/libstub/efistub.h |   4 +
 fs/dcache.c                            |   5 +
 include/asm-generic/vmlinux.lds.h      |   1 +
 include/linux/compiler-gcc.h           |   1 +
 include/linux/compiler-gcc4.h          |   4 +
 include/linux/compiler-gcc5.h          |   2 +
 include/linux/init_task.h              |   8 +
 include/linux/kasan.h                  |  89 ++++++
 include/linux/module.h                 |   2 +-
 include/linux/sched.h                  |   3 +
 include/linux/slab.h                   |  11 +-
 include/linux/slub_def.h               |  19 ++
 include/linux/vmalloc.h                |  13 +-
 kernel/module.c                        |   2 +
 lib/Kconfig.debug                      |   2 +
 lib/Kconfig.kasan                      |  54 ++++
 lib/Makefile                           |   1 +
 lib/test_kasan.c                       | 277 ++++++++++++++++++
 mm/Makefile                            |   4 +
 mm/compaction.c                        |   2 +
 mm/kasan/Makefile                      |   8 +
 mm/kasan/kasan.c                       | 516 +++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                       |  75 +++++
 mm/kasan/report.c                      | 269 +++++++++++++++++
 mm/kmemleak.c                          |   6 +
 mm/page_alloc.c                        |   3 +
 mm/slab_common.c                       |   5 +-
 mm/slub.c                              |  58 +++-
 mm/vmalloc.c                           |  16 +-
 scripts/Makefile.kasan                 |  26 ++
 scripts/Makefile.lib                   |  10 +
 scripts/module-common.lds              |   3 +
 66 files changed, 2022 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

--
-- 
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Jones <davej@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
-- 
2.2.2


^ permalink raw reply	[flat|nested] 862+ messages in thread

* [PATCH v11 01/19] compiler: introduce __alias(symbol) shortcut
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

To be consistent with other compiler attributes,
introduce the __alias(symbol) macro, expanding into
__attribute__((alias(#symbol))).
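
A minimal usage sketch (the function names here are made up purely for illustration):

	int do_thing(int x)
	{
		return x + 1;
	}

	/* do_thing_v2 becomes another name for the same symbol */
	int do_thing_v2(int x) __alias(do_thing);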

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/compiler-gcc.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 02ae99e..cdf13ca 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -66,6 +66,7 @@
 #define __deprecated			__attribute__((deprecated))
 #define __packed			__attribute__((packed))
 #define __weak				__attribute__((weak))
+#define __alias(symbol)		__attribute__((alias(#symbol)))
 
 /*
  * it doesn't make sense on ARM (currently the only user of __naked) to trace
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 01/19] compiler: introduce __alias(symbol) shortcut
@ 2015-02-03 17:42     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

To be consistent with other compiler attributes, introduce the
__alias(symbol) macro, which expands into
__attribute__((alias(#symbol))).

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/compiler-gcc.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 02ae99e..cdf13ca 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -66,6 +66,7 @@
 #define __deprecated			__attribute__((deprecated))
 #define __packed			__attribute__((packed))
 #define __weak				__attribute__((weak))
+#define __alias(symbol)		__attribute__((alias(#symbol)))
 
 /*
  * it doesn't make sense on ARM (currently the only user of __naked) to trace
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
  2015-02-03 17:42   ` Andrey Ryabinin
  (?)
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
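
For illustration, assuming a shadow offset of 0xdffffc0000000000 (the
CONFIG_KASAN_SHADOW_OFFSET default used by the x86_64 port; the number is
shown here only as an example), an access to 0xffff880000000000 is checked
against the shadow byte at:

     (0xffff880000000000 >> 3) + 0xdffffc0000000000 = 0xffffed0000000000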

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.
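
As a minimal sketch of how this encoding is consulted, a 1-byte access could
be validated like this (it mirrors the memory_is_poisoned_1() helper added
by this patch):

	static bool poisoned_1(unsigned long addr)
	{
		s8 shadow = *(s8 *)kasan_mem_to_shadow((void *)addr);

		if (shadow == 0)
			return false;	/* whole 8-byte region accessible */
		/*
		 * 0 < shadow <= 7: only the first 'shadow' bytes are valid,
		 * so the access is bad if the offset within the region
		 * (addr & 7) is >= shadow. Negative values are always bad.
		 */
		return (s8)(addr & 7) >= shadow;
	}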

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
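
For example, with outline instrumentation a one-byte store is conceptually
transformed as follows (an illustrative sketch; the code the compiler
actually emits differs in detail):

	/* original code */
	p[0] = 'x';

	/* instrumented code */
	__asan_store1((unsigned long)&p[0]);
	p[0] = 'x';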

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work on multiple CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both bad reads and bad writes.

	- In some cases (e.g. a redzone overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  a bug right before it happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
---
 Documentation/kasan.txt               | 170 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  46 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 302 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  34 ++++
 mm/kasan/report.c                     | 209 +++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 855 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..f0645a8
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,170 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller binary,
+the latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return ((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether a memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory.
+This option significantly enlarges the kernel, but it gives a x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..9102fda
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,46 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+/* Enable reporting bugs after kasan_disable_current() */
+static inline void kasan_enable_current(void)
+{
+	current->kasan_depth++;
+}
+
+/* Disable reporting bugs for current task */
+static inline void kasan_disable_current(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_current(void) {}
+static inline void kasan_disable_current(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..e5b3fbe
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "KASan: runtime memory debugger"
+	help
+	  Enables the kernel address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~x3 performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform a check
+	  of the shadow memory. This is slower than inline instrumentation,
+	  however it doesn't bloat the size of the kernel's .text section
+	  as much as inline does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking the shadow memory before
+	  memory accesses. This is faster than outline (in some workloads
+	  it gives about a x2 boost over outline instrumentation), but
+	  makes the kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..6dc1aa7
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,302 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	void *shadow_start, *shadow_end;
+
+	shadow_start = kasan_mem_to_shadow(address);
+	shadow_end = kasan_mem_to_shadow(address + size);
+
+	memset(shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+
+/*
+ * All functions below are always inlined so the compiler can
+ * perform better optimizations in each of __asan_loadX/__asan_storeX
+ * depending on the memory access size X.
+ */
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(const u8 *start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*start))
+			return (unsigned long)start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(const void *start,
+						const void *end)
+{
+	unsigned int words;
+	unsigned long ret;
+	unsigned int prefix = (unsigned long)start % 8;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
+			kasan_mem_to_shadow((void *)addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct kasan_access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely((void *)addr <
+		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+		info.access_addr = (void *)addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write, _RET_IP_);
+}
+
+#define DEFINE_ASAN_LOAD_STORE(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__alias(__asan_load##size)				\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__alias(__asan_store##size)				\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DEFINE_ASAN_LOAD_STORE(1);
+DEFINE_ASAN_LOAD_STORE(2);
+DEFINE_ASAN_LOAD_STORE(4);
+DEFINE_ASAN_LOAD_STORE(8);
+DEFINE_ASAN_LOAD_STORE(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__alias(__asan_loadN)
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__alias(__asan_storeN)
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..648b9c0
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,34 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct kasan_access_info {
+	const void *access_addr;
+	const void *first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct kasan_access_info *info);
+void kasan_report_user_access(struct kasan_access_info *info);
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+		<< KASAN_SHADOW_SCALE_SHIFT);
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip);
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..5835d69
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,209 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static const void *find_first_bad_addr(const void *addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	const void *first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct kasan_access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: KASan: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct kasan_access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(const void *row, const void *guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(const void *row, const void *shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(const void *addr)
+{
+	int i;
+	const void *shadow = kasan_mem_to_shadow(addr);
+	const void *shadow_row;
+
+	shadow_row = (void *)round_down((unsigned long)shadow,
+					SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		const void *kaddr = kasan_shadow_to_mem(shadow_row);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%p: " : " %p: ", kaddr);
+
+		kasan_disable_current();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			shadow_row, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_current();
+
+		if (row_is_guilty(shadow_row, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(shadow_row, shadow),
+				'^');
+
+		shadow_row += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: KASan: user-memory-access on address %p\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip)
+{
+	struct kasan_access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = (void *)addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = ip;
+	kasan_report_error(&info);
+}
+
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false, _RET_IP_);	  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true, _RET_IP_);	   \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
@ 2015-02-03 17:42     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some code was borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function that translates an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory); see mm/kasan/kasan.h.

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be x500-x600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work on multiple CPUs; it always sets the number of CPUs to 1.
	  KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both bad reads and bad writes.

	- In some cases (e.g. a redzone overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  a bug right before it happens, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
---
 Documentation/kasan.txt               | 170 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  46 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 302 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  34 ++++
 mm/kasan/report.c                     | 209 +++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 855 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..f0645a8
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,170 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller binary,
+the latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
+later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+=================
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return ((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether a memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory.
+This option significantly enlarges the kernel, but it gives a x1.1-x2 performance
+boost over an outline-instrumented kernel.
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..9102fda
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,46 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+/* Enable reporting bugs after kasan_disable_current() */
+static inline void kasan_enable_current(void)
+{
+	current->kasan_depth++;
+}
+
+/* Disable reporting bugs for current task */
+static inline void kasan_disable_current(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_current(void) {}
+static inline void kasan_disable_current(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..e5b3fbe
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "KASan: runtime memory debugger"
+	help
+	  Enables the kernel address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform the shadow
+	  memory check. This is slower than inline instrumentation, but
+	  it doesn't bloat the size of the kernel's .text section as
+	  much as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline instrumentation (in
+	  some workloads it gives about a 2x boost), but it makes the
+	  kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..6dc1aa7
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,302 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	void *shadow_start, *shadow_end;
+
+	shadow_start = kasan_mem_to_shadow(address);
+	shadow_end = kasan_mem_to_shadow(address + size);
+
+	memset(shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+
+/*
+ * All functions below are always inlined so the compiler can
+ * perform better optimizations in each of __asan_loadX/__asan_storeX
+ * depending on the memory access size X.
+ */
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(const u8 *start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*start))
+			return (unsigned long)start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(const void *start,
+						const void *end)
+{
+	unsigned int words;
+	unsigned long ret;
+	unsigned int prefix = (unsigned long)start % 8;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
+			kasan_mem_to_shadow((void *)addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct kasan_access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely((void *)addr <
+		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+		info.access_addr = (void *)addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write, _RET_IP_);
+}
+
+#define DEFINE_ASAN_LOAD_STORE(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__alias(__asan_load##size)				\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__alias(__asan_store##size)				\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DEFINE_ASAN_LOAD_STORE(1);
+DEFINE_ASAN_LOAD_STORE(2);
+DEFINE_ASAN_LOAD_STORE(4);
+DEFINE_ASAN_LOAD_STORE(8);
+DEFINE_ASAN_LOAD_STORE(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__alias(__asan_loadN)
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__alias(__asan_storeN)
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..648b9c0
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,34 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct kasan_access_info {
+	const void *access_addr;
+	const void *first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct kasan_access_info *info);
+void kasan_report_user_access(struct kasan_access_info *info);
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+		<< KASAN_SHADOW_SCALE_SHIFT);
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip);
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..5835d69
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,209 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static const void *find_first_bad_addr(const void *addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	const void *first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct kasan_access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: KASan: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct kasan_access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(const void *row, const void *guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(const void *row, const void *shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(const void *addr)
+{
+	int i;
+	const void *shadow = kasan_mem_to_shadow(addr);
+	const void *shadow_row;
+
+	shadow_row = (void *)round_down((unsigned long)shadow,
+					SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		const void *kaddr = kasan_shadow_to_mem(shadow_row);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%p: " : " %p: ", kaddr);
+
+		kasan_disable_current();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			shadow_row, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_current();
+
+		if (row_is_guilty(shadow_row, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(shadow_row, shadow),
+				'^');
+
+		shadow_row += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: KASan: user-memory-access on address %p\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip)
+{
+	struct kasan_access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = (void *)addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = ip;
+	kasan_report_error(&info);
+}
+
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false, _RET_IP_);	  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true, _RET_IP_);	   \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (depends on the variables KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
@ 2015-02-03 17:42     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Jonathan Corbet, Michal Marek, Ingo Molnar,
	Peter Zijlstra, open list:DOCUMENTATION,
	open list:KERNEL BUILD + fi...

Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC >= 4.9.2 is required.

This patch only adds infrastructure for kernel address sanitizer. It's not
available for use yet. The idea and some of the code were borrowed from [1].

Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.

Address sanitizer uses 1/8 of the memory addressable in kernel for shadow memory
and uses direct mapping with a scale and offset to translate a memory
address to its corresponding shadow address.

Here is the function to translate an address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
     }
where KASAN_SHADOW_SCALE_SHIFT = 3.
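
For example, with an illustrative KASAN_SHADOW_OFFSET of 0xdffffc0000000000
(the real value is provided per architecture via CONFIG_KASAN_SHADOW_OFFSET):

     kasan_mem_to_shadow(0xffff880000000000)
         = (0xffff880000000000 >> 3) + 0xdffffc0000000000
         = 0xffff0d0000000000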

So for every 8 bytes of memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
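
For example (sizes chosen only for illustration), marking a 13-byte object that
starts at an 8-byte aligned address as accessible leaves its shadow bytes as:

     shadow[0] = 0    (all 8 bytes of the first granule are accessible)
     shadow[1] = 5    (only the first 5 bytes of the second granule are)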

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access or not by checking
the corresponding shadow memory. If the access is not valid, an error is printed.

Historical background of the address sanitizer from Dmitry Vyukov <dvyukov@google.com>:
	"We've developed the set of tools, AddressSanitizer (Asan),
	ThreadSanitizer and MemorySanitizer, for user space. We actively use
	them for testing inside of Google (continuous testing, fuzzing,
	running prod services). To date the tools have found more than 10'000
	scary bugs in Chromium, Google internal codebase and various
	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
	lots of others): [2] [3] [4].
	The tools are part of both gcc and clang compilers.

	We have not yet done massive testing under the Kernel AddressSanitizer
	(it's kind of chicken and egg problem, you need it to be upstream to
	start applying it extensively). To date it has found about 50 bugs.
	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
	people from Samsung and Oracle have found some.

	[...]

	As others noted, the main feature of AddressSanitizer is its
	performance due to inline compiler instrumentation and simple linear
	shadow memory. User-space Asan has ~2x slowdown on computational
	programs and ~2x memory consumption increase. Taking into account that
	kernel usually consumes only small fraction of CPU and memory when
	running real user-space programs, I would expect that kernel Asan will
	have ~10-30% slowdown and similar memory consumption increase (when we
	finish all tuning).

	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
	memory. Asan+Msan will provide feature-parity with kmemcheck. As
	others noted, Asan will unlikely replace debug slab and pagealloc that
	can be enabled at runtime. Asan uses compiler instrumentation, so even
	if it is disabled, it still incurs visible overheads.

	Asan technology is easily portable to other architectures. Compiler
	instrumentation is fully portable. Runtime has some arch-dependent
	parts like shadow mapping and atomic operation interception. They are
	relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:
	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	  Some brief performance testing showed that kasan could be 500-600 times
	  faster than kmemcheck:

$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

no debug:	87380  16384  16384    30.00    41624.72

kasan inline:	87380  16384  16384    30.00    12870.54

kasan outline:	87380  16384  16384    30.00    10586.39

kmemcheck: 	87380  16384  16384    30.03      20.23

	- Also, kmemcheck can't work with several CPUs: it always sets the number
	  of CPUs to 1. KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both reads and writes.

	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
	  bugs only on allocation/freeing of an object. KASan catches
	  bugs right before they happen, so we always know the exact
	  place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov <adech.fo@gmail.com>

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
---
 Documentation/kasan.txt               | 170 +++++++++++++++++++
 Makefile                              |   3 +-
 drivers/firmware/efi/libstub/Makefile |   1 +
 include/linux/kasan.h                 |  46 ++++++
 include/linux/sched.h                 |   3 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  43 +++++
 mm/Makefile                           |   1 +
 mm/kasan/Makefile                     |   8 +
 mm/kasan/kasan.c                      | 302 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  34 ++++
 mm/kasan/report.c                     | 209 +++++++++++++++++++++++
 scripts/Makefile.kasan                |  24 +++
 scripts/Makefile.lib                  |  10 ++
 14 files changed, 855 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c
 create mode 100644 scripts/Makefile.kasan

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..f0645a8
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,170 @@
+Kernel address sanitizer
+================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or later.
+
+Currently KASan is supported only for the x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN configure kernel with:
+
+	  CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller binary
+while the latter is 1.1 - 2 times faster. Inline instrumentation requires
+GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put
+at least 'slub_debug=U' in the boot cmdline.
+
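+For example, a minimal debugging configuration might look like this
+(illustrative only, adjust to your own config):
+
+	CONFIG_KASAN=y
+	CONFIG_KASAN_INLINE=y
+	CONFIG_SLUB_DEBUG=y
+	CONFIG_STACKTRACE=y
+
+with 'slub_debug=U' added to the kernel boot cmdline.
+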
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := n
+
+        For all files in one directory:
+                KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+                                                 ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections of the report describe the slub object where the bad access
+happened. See the 'SLUB Debug output' section in Documentation/vm/slub.txt for
+details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or a part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
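+A report like the one above can be triggered by an intentionally buggy access,
+e.g. (a sketch modelled on the kmalloc_oob_right test from the test_kasan
+module referenced in the report; the size is illustrative):
+
+	char *ptr = kmalloc(123, GFP_KERNEL);
+
+	if (ptr)
+		ptr[123] = 'x';	/* one byte past the end of the object */
+	kfree(ptr);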
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
+
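+For reference, the inverse mapping used by the report code is in mm/kasan/kasan.h
+and simply undoes the shift and offset:
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+		<< KASAN_SHADOW_SCALE_SHIFT);
+}
+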
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the
+memory access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 can perform inline instrumentation. Instead of making function calls,
+GCC directly inserts the code that checks the shadow memory. This option
+significantly enlarges the kernel, but it gives an x1.1-x2 performance boost
+over an outline-instrumented kernel.
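+
+As a rough illustration (not literal compiler output), the inlined check for a
+1-byte load looks approximately like:
+
+	s8 shadow = *(s8 *)(((unsigned long)addr >> 3) + KASAN_SHADOW_OFFSET);
+	if (unlikely(shadow && ((s8)((unsigned long)addr & 7) >= shadow)))
+		__asan_report_load1_noabort((unsigned long)addr);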
diff --git a/Makefile b/Makefile
index 6b69223..a9840e9 100644
--- a/Makefile
+++ b/Makefile
@@ -428,7 +428,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -797,6 +797,7 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
 	KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
 endif
 
+include $(srctree)/scripts/Makefile.kasan
 include $(srctree)/scripts/Makefile.extrawarn
 include ${srctree}/scripts/Makefile.lto
 
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b14bc2b..c5533c7 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -19,6 +19,7 @@ KBUILD_CFLAGS			:= $(cflags-y) \
 				   $(call cc-option,-fno-stack-protector)
 
 GCOV_PROFILE			:= n
+KASAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
 lib-$(CONFIG_EFI_ARMSTUB)	+= arm-stub.o fdt.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..9102fda
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,46 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+/* Enable reporting bugs after kasan_disable_current() */
+static inline void kasan_enable_current(void)
+{
+	current->kasan_depth++;
+}
+
+/* Disable reporting bugs for current task */
+static inline void kasan_disable_current(void)
+{
+	current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_current(void) {}
+static inline void kasan_disable_current(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 22ee0d5..ef08da2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1664,6 +1664,9 @@ struct task_struct {
 	unsigned long timer_slack_ns;
 	unsigned long default_timer_slack_ns;
 
+#ifdef CONFIG_KASAN
+	unsigned int kasan_depth;
+#endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	/* Index of current stored address in ret_stack */
 	int curr_ret_stack;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 68668f6..1c528d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -651,6 +651,8 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..e5b3fbe
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,43 @@
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "KASan: runtime memory debugger"
+	help
+	  Enables the kernel address sanitizer - a runtime memory debugger
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  This is strictly a debugging feature. It consumes about 1/8
+	  of available memory and brings about a ~3x performance slowdown.
+	  For better error detection enable CONFIG_STACKTRACE,
+	  and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+	hex
+
+choice
+	prompt "Instrumentation type"
+	depends on KASAN
+	default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+	bool "Outline instrumentation"
+	help
+	  Before every memory access the compiler inserts a call to
+	  __asan_load*/__asan_store*. These functions perform the shadow
+	  memory check. This is slower than inline instrumentation, but
+	  it doesn't bloat the size of the kernel's .text section as
+	  much as inline instrumentation does.
+
+config KASAN_INLINE
+	bool "Inline instrumentation"
+	help
+	  The compiler directly inserts code checking shadow memory before
+	  memory accesses. This is faster than outline instrumentation (in
+	  some workloads it gives about a 2x boost), but it makes the
+	  kernel's .text size much bigger.
+
+endchoice
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index ac79877..79f4fbc 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -49,6 +49,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..6dc1aa7
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,302 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	void *shadow_start, *shadow_end;
+
+	shadow_start = kasan_mem_to_shadow(address);
+	shadow_end = kasan_mem_to_shadow(address + size);
+
+	memset(shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+	kasan_poison_shadow(address, size, 0);
+
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+
+/*
+ * All functions below are always inlined so the compiler can
+ * perform better optimizations in each of __asan_loadX/__asan_storeX
+ * depending on the memory access size X.
+ */
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(shadow_value)) {
+		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+		return unlikely(last_accessible_byte >= shadow_value);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 1))
+			return true;
+
+		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 3))
+			return true;
+
+		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		if (memory_is_poisoned_1(addr + 7))
+			return true;
+
+		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+			return false;
+
+		return unlikely(*(u8 *)shadow_addr);
+	}
+
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(*shadow_addr)) {
+		u16 shadow_first_bytes = *(u16 *)shadow_addr;
+		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+		if (unlikely(shadow_first_bytes))
+			return true;
+
+		if (likely(!last_byte))
+			return false;
+
+		return memory_is_poisoned_1(addr + 15);
+	}
+
+	return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(const u8 *start,
+					size_t size)
+{
+	while (size) {
+		if (unlikely(*start))
+			return (unsigned long)start;
+		start++;
+		size--;
+	}
+
+	return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(const void *start,
+						const void *end)
+{
+	unsigned int words;
+	unsigned long ret;
+	unsigned int prefix = (unsigned long)start % 8;
+
+	if (end - start <= 16)
+		return bytes_is_zero(start, end - start);
+
+	if (prefix) {
+		prefix = 8 - prefix;
+		ret = bytes_is_zero(start, prefix);
+		if (unlikely(ret))
+			return ret;
+		start += prefix;
+	}
+
+	words = (end - start) / 8;
+	while (words) {
+		if (unlikely(*(u64 *)start))
+			return bytes_is_zero(start, 8);
+		start += 8;
+		words--;
+	}
+
+	return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+						size_t size)
+{
+	unsigned long ret;
+
+	ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
+			kasan_mem_to_shadow((void *)addr + size - 1) + 1);
+
+	if (unlikely(ret)) {
+		unsigned long last_byte = addr + size - 1;
+		s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);
+
+		if (unlikely(ret != (unsigned long)last_shadow ||
+			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+			return true;
+	}
+	return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+	if (__builtin_constant_p(size)) {
+		switch (size) {
+		case 1:
+			return memory_is_poisoned_1(addr);
+		case 2:
+			return memory_is_poisoned_2(addr);
+		case 4:
+			return memory_is_poisoned_4(addr);
+		case 8:
+			return memory_is_poisoned_8(addr);
+		case 16:
+			return memory_is_poisoned_16(addr);
+		default:
+			BUILD_BUG();
+		}
+	}
+
+	return memory_is_poisoned_n(addr, size);
+}
+
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	struct kasan_access_info info;
+
+	if (unlikely(size == 0))
+		return;
+
+	if (unlikely((void *)addr <
+		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+		info.access_addr = (void *)addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (likely(!memory_is_poisoned(addr, size)))
+		return;
+
+	kasan_report(addr, size, write, _RET_IP_);
+}
+
+#define DEFINE_ASAN_LOAD_STORE(size)				\
+	void __asan_load##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, false);		\
+	}							\
+	EXPORT_SYMBOL(__asan_load##size);			\
+	__alias(__asan_load##size)				\
+	void __asan_load##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
+	void __asan_store##size(unsigned long addr)		\
+	{							\
+		check_memory_region(addr, size, true);		\
+	}							\
+	EXPORT_SYMBOL(__asan_store##size);			\
+	__alias(__asan_store##size)				\
+	void __asan_store##size##_noabort(unsigned long);	\
+	EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DEFINE_ASAN_LOAD_STORE(1);
+DEFINE_ASAN_LOAD_STORE(2);
+DEFINE_ASAN_LOAD_STORE(4);
+DEFINE_ASAN_LOAD_STORE(8);
+DEFINE_ASAN_LOAD_STORE(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__alias(__asan_loadN)
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__alias(__asan_storeN)
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..648b9c0
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,34 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+struct kasan_access_info {
+	const void *access_addr;
+	const void *first_bad_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+void kasan_report_error(struct kasan_access_info *info);
+void kasan_report_user_access(struct kasan_access_info *info);
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+		<< KASAN_SHADOW_SCALE_SHIFT);
+}
+
+static inline bool kasan_enabled(void)
+{
+	return !current->kasan_depth;
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip);
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..5835d69
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,209 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static const void *find_first_bad_addr(const void *addr, size_t size)
+{
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+	const void *first_bad_addr = addr;
+
+	while (!shadow_val && first_bad_addr < addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+static void print_error_description(struct kasan_access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val;
+
+	info->first_bad_addr = find_first_bad_addr(info->access_addr,
+						info->access_size);
+
+	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "out of bounds access";
+		break;
+	}
+
+	pr_err("BUG: KASan: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+}
+
+static void print_address_description(struct kasan_access_info *info)
+{
+	dump_stack();
+}
+
+static bool row_is_guilty(const void *row, const void *guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(const void *row, const void *shadow)
+{
+	/* The length of ">ff00ff00ff00ff00: " is
+	 *    3 + (BITS_PER_LONG/8)*2 chars.
+	 */
+	return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(const void *addr)
+{
+	int i;
+	const void *shadow = kasan_mem_to_shadow(addr);
+	const void *shadow_row;
+
+	shadow_row = (void *)round_down((unsigned long)shadow,
+					SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		const void *kaddr = kasan_shadow_to_mem(shadow_row);
+		char buffer[4 + (BITS_PER_LONG/8)*2];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%p: " : " %p: ", kaddr);
+
+		kasan_disable_current();
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			shadow_row, SHADOW_BYTES_PER_ROW, 0);
+		kasan_enable_current();
+
+		if (row_is_guilty(shadow_row, shadow))
+			pr_err("%*c\n",
+				shadow_pointer_offset(shadow_row, shadow),
+				'^');
+
+		shadow_row += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+void kasan_report_error(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->first_bad_addr);
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report_user_access(struct kasan_access_info *info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&report_lock, flags);
+	pr_err("================================="
+		"=================================\n");
+	pr_err("BUG: KASan: user-memory-access on address %p\n",
+		info->access_addr);
+	pr_err("%s of size %zu by task %s/%d\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->comm, task_pid_nr(current));
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	spin_unlock_irqrestore(&report_lock, flags);
+}
+
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip)
+{
+	struct kasan_access_info info;
+
+	if (likely(!kasan_enabled()))
+		return;
+
+	info.access_addr = (void *)addr;
+	info.access_size = size;
+	info.is_write = is_write;
+	info.ip = ip;
+	kasan_report_error(&info);
+}
+
+
+#define DEFINE_ASAN_REPORT_LOAD(size)                     \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{                                                         \
+	kasan_report(addr, size, false, _RET_IP_);	  \
+}                                                         \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size)                     \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{                                                          \
+	kasan_report(addr, size, true, _RET_IP_);	   \
+}                                                          \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+	kasan_report(addr, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..159396a
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,24 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+	call_threshold := 10000
+else
+	call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(CFLAGS_KASAN_MINIMAL),)
+        $(warning Cannot use CONFIG_KASAN: \
+            -fsanitize=kernel-address is not supported by compiler)
+else
+    ifeq ($(CFLAGS_KASAN),)
+        $(warning CONFIG_KASAN: compiler does not support all options.\
+            Trying minimal configuration)
+        CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+    endif
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 5117552..044eb4f 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for kernel except some files or directories
+# we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 03/19] kasan: disable memory hotplug
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

Memory hotplug currently does not work with KASan: since there is no
shadow for hotplugged memory, the kernel will crash on the first access
to it. Making this work would require allocating shadow for the newly
added memory.

Proper memory hotplug support will be implemented at some future point.
Until then, print a warning at startup and disable memory hot-add.
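
For illustration only (not part of this patch), a rough sketch of what
such future support could look like, populating shadow before the new
memory goes online. kasan_populate_shadow_for() is a hypothetical helper;
struct memory_notify and the notifier return codes are real interfaces:

	static int kasan_mem_notifier(struct notifier_block *nb,
				unsigned long action, void *data)
	{
		struct memory_notify *mem = data;

		if (action == MEM_GOING_ONLINE) {
			/* hypothetical: map shadow for the hot-added range */
			if (kasan_populate_shadow_for(mem->start_pfn, mem->nr_pages))
				return NOTIFY_BAD;	/* reject the hot-add on failure */
		}
		return NOTIFY_OK;
	}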

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/kasan/kasan.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 6dc1aa7..def8110 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -20,6 +20,7 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/memblock.h>
+#include <linux/memory.h>
 #include <linux/mm.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
@@ -300,3 +301,23 @@ EXPORT_SYMBOL(__asan_storeN_noabort);
 /* to shut up compiler complaints */
 void __asan_handle_no_return(void) {}
 EXPORT_SYMBOL(__asan_handle_no_return);
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static int kasan_mem_notifier(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK;
+}
+
+static int __init kasan_memhotplug_init(void)
+{
+	pr_err("WARNING: KASan doesn't support memory hot-add\n");
+	pr_err("Memory hot-add will be disabled\n");
+
+	hotplug_memory_notifier(kasan_mem_notifier, 0);
+
+	return 0;
+}
+
+module_init(kasan_memhotplug_init);
+#endif
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 04/19] x86_64: add KASan support
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Jonathan Corbet, Andy Lutomirski, open list:DOCUMENTATION

This patch adds arch-specific code for the kernel address sanitizer.

16TB of virtual address space is used for the shadow memory.
It is located in the range [ffffec0000000000 - fffffc0000000000],
between vmemmap and the %esp fixup stacks.

At an early stage we map the whole shadow region with the zero page.
Later, once pages are mapped into the direct mapping address range,
we unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
and allocate and map real shadow memory, reusing the vmemmap_populate()
function.

Also replace __pa with __pa_nodebug before the shadow is initialized:
with CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
(__phys_addr), and __phys_addr is instrumented, so __asan_load could be
called before the shadow area is initialized.
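
As a worked check of the layout above, using only values from this patch
(KASAN_SHADOW_OFFSET = 0xdffffc0000000000 from Kconfig.kasan, kernel
address space starting at 0xffff800000000000):

	KASAN_SHADOW_START = 0xdffffc0000000000 + (0xffff800000000000 >> 3)
	                   = 0xffffec0000000000
	KASAN_SHADOW_END   = KASAN_SHADOW_START + (1ULL << (47 - 3))
	                   = 0xfffffc0000000000

which matches the 16TB [ffffec0000000000 - fffffc0000000000] range added
to Documentation/x86/x86_64/mm.txt below.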

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/x86/x86_64/mm.txt   |   2 +
 arch/x86/Kconfig                  |   1 +
 arch/x86/boot/Makefile            |   2 +
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/include/asm/kasan.h      |  31 ++++++
 arch/x86/kernel/Makefile          |   2 +
 arch/x86/kernel/dumpstack.c       |   5 +-
 arch/x86/kernel/head64.c          |   9 +-
 arch/x86/kernel/head_64.S         |  30 ++++++
 arch/x86/kernel/setup.c           |   3 +
 arch/x86/mm/Makefile              |   3 +
 arch/x86/mm/kasan_init_64.c       | 199 ++++++++++++++++++++++++++++++++++++++
 arch/x86/realmode/Makefile        |   2 +-
 arch/x86/realmode/rm/Makefile     |   1 +
 arch/x86/vdso/Makefile            |   1 +
 lib/Kconfig.kasan                 |   1 +
 16 files changed, 290 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 052ee64..05712ac 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d34ef08..e5c87b2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -85,6 +85,7 @@ config X86
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 3db07f3..57bbf2f 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
 # The number is the same as you would ordinarily press at bootup.
 
+KASAN_SANITIZE := n
+
 SVGA_MODE	:= -DSVGA_MODE=NORMAL_VGA
 
 targets		:= vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index ad754b4..843feb3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -16,6 +16,8 @@
 #	(see scripts/Makefile.lib size_append)
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+KASAN_SANITIZE := n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..8b22422
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,31 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
+					(0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+extern pte_t kasan_zero_pte[];
+extern pte_t kasan_zero_pmd[];
+extern pte_t kasan_zero_pud[];
+
+#ifdef CONFIG_KASAN
+void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_init(void);
+#else
+static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 316b34e..4fc8ca7 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -16,6 +16,8 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
+KASAN_SANITIZE_head$(BITS).o := n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
 obj-y			:= process_$(BITS).o signal.o entry_$(BITS).o
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index b74ebc7..cf3df1d 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -265,7 +265,10 @@ int __die(const char *str, struct pt_regs *regs, long err)
 	printk("SMP ");
 #endif
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
+	printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+	printk("KASAN");
 #endif
 	printk("\n");
 	if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index eda1a86..efcddfa 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
 #include <asm/bios_ebda.h>
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
+#include <asm/kasan.h>
 
 /*
  * Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)
 
 	next_early_pgt = 0;
 
-	write_cr3(__pa(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -158,6 +159,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* Kill off the identity-map trampoline */
 	reset_early_page_tables();
 
+	kasan_map_early_shadow(early_level4_pgt);
+
 	/* clear bss before set_intr_gate with early_idt_handler */
 	clear_bss();
 
@@ -179,6 +182,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	/* set init_level4_pgt kernel high mapping*/
 	init_level4_pgt[511] = early_level4_pgt[511];
 
+	kasan_map_early_shadow(init_level4_pgt);
+
 	x86_64_start_reservations(real_mode_data);
 }
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a468c0a..6fd514d9 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -514,8 +514,38 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
+#ifdef CONFIG_KASAN
+#define FILL(VAL, COUNT)				\
+	.rept (COUNT) ;					\
+	.quad	(VAL) ;					\
+	.endr
+
+NEXT_PAGE(kasan_zero_pte)
+	FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pmd)
+	FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
+NEXT_PAGE(kasan_zero_pud)
+	FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
+
+#undef FILL
+#endif
+
+
 #include "../../x86/xen/xen-head.S"
 	
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
+
+#ifdef CONFIG_KASAN
+/*
+ * This page is used as early shadow. We don't use empty_zero_page
+ * at early stages, since stack instrumentation could write some
+ * garbage to this page.
+ * Later we reuse it as zero shadow for large ranges of memory
+ * that are allowed to be accessed, but are not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+NEXT_PAGE(kasan_zero_page)
+	.skip PAGE_SIZE
+#endif
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c4648ada..27d2009 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/bugs.h>
+#include <asm/kasan.h>
 
 #include <asm/vsyscall.h>
 #include <asm/cpu.h>
@@ -1174,6 +1175,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	kasan_init();
+
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
 		mmu_cr4_features = read_cr4();
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index ecfdc46..c4cc740 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,6 +20,9 @@ obj-$(CONFIG_HIGHMEM)		+= highmem_32.o
 
 obj-$(CONFIG_KMEMCHECK)		+= kmemcheck/
 
+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+
 obj-$(CONFIG_MMIOTRACE)		+= mmiotrace.o
 mmiotrace-y			:= kmmio.o pf_in.o mmio-mod.o
 obj-$(CONFIG_MMIOTRACE_TEST)	+= testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..3e4d9a1
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,199 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+
+static int __init map_range(struct range *range)
+{
+	unsigned long start;
+	unsigned long end;
+
+	start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
+	end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
+
+	/*
+	 * end + 1 here is intentional. We check several shadow bytes in advance
+	 * to slightly speed up fastpath. In some rare cases we could cross
+	 * boundary of mapped shadow, so we just map some more here.
+	 */
+	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		pgd_clear(pgd_offset_k(start));
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+
+	for (i = pgd_index(start); start < end; i++) {
+		pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud)
+				| _KERNPG_TABLE);
+		start += PGDIR_SIZE;
+	}
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	while (addr + PAGE_SIZE <= end) {
+		WARN_ON(!pte_none(*pte));
+		set_pte(pte, __pte(__pa_nodebug(kasan_zero_page)
+					| __PAGE_KERNEL_RO));
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+	return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+		WARN_ON(!pmd_none(*pmd));
+		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+					| __PAGE_KERNEL_RO));
+		addr += PMD_SIZE;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (addr < end) {
+		if (pmd_none(*pmd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pte_populate(pmd, addr, end);
+	}
+	return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	int ret = 0;
+	pud_t *pud = pud_offset(pgd, addr);
+
+	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+		WARN_ON(!pud_none(*pud));
+		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+					| __PAGE_KERNEL_RO));
+		addr += PUD_SIZE;
+		pud = pud_offset(pgd, addr);
+	}
+
+	if (addr < end) {
+		if (pud_none(*pud)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pmd_populate(pud, addr, end);
+	}
+	return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+	int ret = 0;
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+		WARN_ON(!pgd_none(*pgd));
+		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+					| __PAGE_KERNEL_RO));
+		addr += PGDIR_SIZE;
+		pgd = pgd_offset_k(addr);
+	}
+
+	if (addr < end) {
+		if (pgd_none(*pgd)) {
+			void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+			if (!p)
+				return -ENOMEM;
+			set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+		}
+		ret = zero_pud_populate(pgd, addr, end);
+	}
+	return ret;
+}
+
+
+static void __init populate_zero_shadow(const void *start, const void *end)
+{
+	if (zero_pgd_populate((unsigned long)start, (unsigned long)end))
+		panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+			     unsigned long val,
+			     void *data)
+{
+	if (val == DIE_GPF) {
+		pr_emerg("CONFIG_KASAN_INLINE enabled");
+		pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+	.notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_init(void)
+{
+	int i;
+
+#ifdef CONFIG_KASAN_INLINE
+	register_die_notifier(&kasan_die_notifier);
+#endif
+
+	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+	load_cr3(early_level4_pgt);
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	populate_zero_shadow((void *)KASAN_SHADOW_START,
+			kasan_mem_to_shadow((void *)PAGE_OFFSET));
+
+	for (i = 0; i < E820_X_MAX; i++) {
+		if (pfn_mapped[i].end == 0)
+			break;
+
+		if (map_range(&pfn_mapped[i]))
+			panic("kasan: unable to allocate shadow!");
+	}
+
+	populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+				(void *)KASAN_SHADOW_END);
+
+	memset(kasan_zero_page, 0, PAGE_SIZE);
+
+	load_cr3(init_level4_pgt);
+}
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
 # for more details.
 #
 #
-
+KASAN_SANITIZE := n
 subdir- := rm
 
 obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
 # for more details.
 #
 #
+KASAN_SANITIZE := n
 
 always := realmode.bin realmode.relocs
 
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 5a4affe..2aacd7c 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
 #
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index e5b3fbe..0052b1b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -15,6 +15,7 @@ config KASAN
 
 config KASAN_SHADOW_OFFSET
 	hex
+	default 0xdffffc0000000000 if X86_64
 
 choice
 	prompt "Instrumentation type"
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 05/19] mm: page_alloc: add kasan hooks on alloc and free paths
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

Add kernel address sanitizer hooks to mark the addresses of allocated
pages as accessible in the corresponding shadow region, and to mark
freed pages as inaccessible.
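
For illustration only (not part of this patch), a minimal sketch of the
kind of bug these hooks catch. The demo function below is hypothetical
test code; the page allocator calls are the real API:

	static noinline void page_use_after_free_demo(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, 0);
		char *ptr;

		if (!page)
			return;

		ptr = page_address(page);
		__free_pages(page, 0);		/* kasan_free_pages() poisons the shadow */
		((volatile char *)ptr)[0] = 'x';	/* KASan reports "use after free" */
	}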

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  2 ++
 mm/kasan/report.c     | 11 +++++++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 38 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9102fda..f00c15c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -34,6 +34,9 @@ static inline void kasan_disable_current(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -41,6 +44,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_current(void) {}
 static inline void kasan_disable_current(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index b68736c..b2d3ef9 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 #include <linux/balloon_compaction.h>
 #include <linux/page-isolation.h>
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -72,6 +73,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index def8110..b516eb8 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -254,6 +254,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write, _RET_IP_);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 #define DEFINE_ASAN_LOAD_STORE(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 648b9c0..d3c90d5 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+
 struct kasan_access_info {
 	const void *access_addr;
 	const void *first_bad_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 5835d69..fab8e78 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -54,6 +54,9 @@ static void print_error_description(struct kasan_access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -69,6 +72,14 @@ static void print_error_description(struct kasan_access_info *info)
 
 static void print_address_description(struct kasan_access_info *info)
 {
+	const void *addr = info->access_addr;
+
+	if ((addr >= (void *)PAGE_OFFSET) &&
+		(addr < high_memory)) {
+		struct page *page = virt_to_head_page(addr);
+		dump_page(page, "kasan: bad access detected");
+	}
+
 	dump_stack();
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..31bc2e8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kmemcheck.h>
+#include <linux/kasan.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -787,6 +788,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -970,6 +972,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 06/19] mm: slub: introduce virt_to_obj function.
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:42     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

virt_to_obj() takes the kmem_cache address, the address of the slab page
and an address x pointing somewhere inside a slab object, and returns the
address of the beginning of that object.
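
As a worked example of the arithmetic (the numbers are made up for
illustration): with s->size == 128 and slab_page == P, an inner pointer
x == P + 300 lies in the third object, and virt_to_obj() returns

	P + 300 - (300 % 128) = P + 300 - 44 = P + 256

i.e. the start of that object.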

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 9abf04e..db7d5de 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,4 +110,20 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 }
 #endif
 
+
+/**
+ * virt_to_obj - returns address of the beginning of object.
+ * @s: object's kmem_cache
+ * @slab_page: address of slab page
+ * @x: address within object memory range
+ *
+ * Returns address of the beginning of object
+ */
+static inline void *virt_to_obj(struct kmem_cache *s,
+				const void *slab_page,
+				const void *x)
+{
+	return (void *)x - ((x - slab_page) % s->size);
+}
+
 #endif /* _LINUX_SLUB_DEF_H */
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 07/19] mm: slub: share object_err function
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Remove the static qualifier from object_err() and add its declaration to
linux/slub_def.h so that it can be used by the kernel address sanitizer.
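
For illustration only, the kind of caller this enables (hypothetical
kasan-side reporting code, not part of this patch; 'cache' is assumed to
be the object's kmem_cache obtained elsewhere):

	if (PageSlab(page)) {
		void *object = virt_to_obj(cache, page_address(page), addr);

		object_err(cache, page, object, "kasan: bad access detected");
	}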

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 3 +++
 mm/slub.c                | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index db7d5de..3388511 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -126,4 +126,7 @@ static inline void *virt_to_obj(struct kmem_cache *s,
 	return (void *)x - ((x - slab_page) % s->size);
 }
 
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 1562955..3eb73f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,7 +629,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 07/19] mm: slub: share object_err function
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

Remove the static qualifier from object_err() and add its declaration
to linux/slub_def.h so it can be used by the kernel address sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/slub_def.h | 3 +++
 mm/slub.c                | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index db7d5de..3388511 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -126,4 +126,7 @@ static inline void *virt_to_obj(struct kmem_cache *s,
 	return (void *)x - ((x - slab_page) % s->size);
 }
 
+void object_err(struct kmem_cache *s, struct page *page,
+		u8 *object, char *reason);
+
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 1562955..3eb73f5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -629,7 +629,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
 	slab_bug(s, "%s", reason);
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 08/19] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

It's ok for slub to access memory that kasan has marked as inaccessible
(the object's metadata). Kasan shouldn't print a report in that case,
because these accesses are valid. Disabling instrumentation of slub.c
is not enough to achieve this, because slub passes pointers to object
metadata into external functions like memchr_inv().

We don't want to disable instrumentation for memchr_inv() because it is
a fairly generic function, and doing so would make us miss real bugs.

metadata_access_enable()/metadata_access_disable() are used to tell
KASan where accesses to metadata start and end, so that KASan reports
can be temporarily disabled.
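
For illustration only, one plausible way such enable/disable pairs can
be implemented is a per-task nesting counter that the report path
checks before printing anything. The kasan_disable_current() and
kasan_enable_current() helpers used below are provided elsewhere in
this series and may differ in detail from this sketch:

	/* Hypothetical sketch, not the actual code from this series. */
	static inline void kasan_disable_current(void)
	{
		current->kasan_depth++;	/* reports suppressed while non-zero */
	}

	static inline void kasan_enable_current(void)
	{
		current->kasan_depth--;
	}

	static inline bool kasan_report_enabled(void)
	{
		return !current->kasan_depth;
	}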

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3eb73f5..390972f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -20,6 +20,7 @@
 #include <linux/proc_fs.h>
 #include <linux/notifier.h>
 #include <linux/seq_file.h>
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
@@ -468,12 +469,30 @@ static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
 /*
+ * slub is about to manipulate internal object metadata.  This memory lies
+ * outside the range of the allocated object, so accessing it would normally
+ * be reported by kasan as a bounds error.  metadata_access_enable() is used
+ * to tell kasan that these accesses are OK.
+ */
+static inline void metadata_access_enable(void)
+{
+	kasan_disable_current();
+}
+
+static inline void metadata_access_disable(void)
+{
+	kasan_enable_current();
+}
+
+/*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +522,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +698,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +793,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 08/19] mm: slub: introduce metadata_access_enable()/metadata_access_disable()
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

It's ok for slub to access memory that kasan has marked as inaccessible
(the object's metadata). Kasan shouldn't print a report in that case,
because these accesses are valid. Disabling instrumentation of slub.c
is not enough to achieve this, because slub passes pointers to object
metadata into external functions like memchr_inv().

We don't want to disable instrumentation for memchr_inv() because it is
a fairly generic function, and doing so would make us miss real bugs.

metadata_access_enable()/metadata_access_disable() are used to tell
KASan where accesses to metadata start and end, so that KASan reports
can be temporarily disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3eb73f5..390972f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -20,6 +20,7 @@
 #include <linux/proc_fs.h>
 #include <linux/notifier.h>
 #include <linux/seq_file.h>
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
@@ -468,12 +469,30 @@ static char *slub_debug_slabs;
 static int disable_higher_order_debug;
 
 /*
+ * slub is about to manipulate internal object metadata.  This memory lies
+ * outside the range of the allocated object, so accessing it would normally
+ * be reported by kasan as a bounds error.  metadata_access_enable() is used
+ * to tell kasan that these accesses are OK.
+ */
+static inline void metadata_access_enable(void)
+{
+	kasan_disable_current();
+}
+
+static inline void metadata_access_disable(void)
+{
+	kasan_enable_current();
+}
+
+/*
  * Object debugging
  */
 static void print_section(char *text, u8 *addr, unsigned int length)
 {
+	metadata_access_enable();
 	print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
 			length, 1);
+	metadata_access_disable();
 }
 
 static struct track *get_track(struct kmem_cache *s, void *object,
@@ -503,7 +522,9 @@ static void set_track(struct kmem_cache *s, void *object,
 		trace.max_entries = TRACK_ADDRS_COUNT;
 		trace.entries = p->addrs;
 		trace.skip = 3;
+		metadata_access_enable();
 		save_stack_trace(&trace);
+		metadata_access_disable();
 
 		/* See rant in lockdep.c */
 		if (trace.nr_entries != 0 &&
@@ -677,7 +698,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *fault;
 	u8 *end;
 
+	metadata_access_enable();
 	fault = memchr_inv(start, value, bytes);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 
@@ -770,7 +793,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	if (!remainder)
 		return 1;
 
+	metadata_access_enable();
 	fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+	metadata_access_disable();
 	if (!fault)
 		return 1;
 	while (end > fault && end[-1] == POISON_INUSE)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 09/19] mm: slub: add kernel address sanitizer support for slub allocator
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Chernenkov, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as
redzone. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to find
out the size of the really allocated area. Such callers may validly
access the whole allocated memory, so it has to be marked as
accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation for these files is disabled.
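
A worked example of the resulting poisoning, following the
kasan_kmalloc() hunk below (sizes are illustrative only):

	/*
	 * kmalloc(40) served from a cache with object_size == 64
	 * (KASAN_SHADOW_SCALE_SIZE == 8):
	 *
	 *   redzone_start = round_up(object + 40, 8) = object + 40
	 *   redzone_end   = round_up(object + 64, 8) = object + 64
	 *
	 * Bytes [0, 40) of the object are accessible, bytes [40, 64)
	 * are poisoned with KASAN_KMALLOC_REDZONE, so e.g. reading
	 * ptr[40] is reported as an out-of-bounds access.
	 */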

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
---
 include/linux/kasan.h | 27 ++++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  5 +++
 mm/kasan/report.c     | 21 +++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 31 ++++++++++++++--
 9 files changed, 197 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f00c15c..d5310ee 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -37,6 +37,18 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -47,6 +59,21 @@ static inline void kasan_disable_current(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+					void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index ed2ffaa..76f1fee 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -325,7 +326,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -333,7 +337,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 0052b1b..a11ac02 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "KASan: runtime memory debugger"
+	depends on SLUB_DEBUG
 	help
 	  Enables kernel address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 79f4fbc..3c1caa2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b516eb8..dc83f07 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -31,6 +31,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -268,6 +269,103 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_poison_slab(struct page *page)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)object + cache->object_size,
+				KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 #define DEFINE_ASAN_LOAD_STORE(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d3c90d5..5b052ab 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+
 
 struct kasan_access_info {
 	const void *access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index fab8e78..2760edb 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -55,8 +56,11 @@ static void print_error_description(struct kasan_access_info *info)
 
 	switch (shadow_val) {
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
+	case KASAN_PAGE_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -77,6 +81,23 @@ static void print_address_description(struct kasan_access_info *info)
 	if ((addr >= (void *)PAGE_OFFSET) &&
 		(addr < high_memory)) {
 		struct page *page = virt_to_head_page(addr);
+
+		if (PageSlab(page)) {
+			void *object;
+			struct kmem_cache *cache = page->slab_cache;
+			void *last_object;
+
+			object = virt_to_obj(cache, page_address(page), addr);
+			last_object = page_address(page) +
+				page->objects * cache->size;
+
+			if (unlikely(object > last_object))
+				object = last_object; /* we hit into padding */
+
+			object_err(cache, page, object,
+				"kasan: bad access detected");
+			return;
+		}
 		dump_page(page, "kasan: bad access detected");
 	}
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0dd9eb4..820a273 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -887,6 +887,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1066,8 +1067,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 390972f..9185e1d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1251,11 +1251,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -1278,6 +1280,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
 	memcg_kmem_put_cache(s);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1301,6 +1304,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1395,8 +1400,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
+		kasan_poison_object_data(s, object);
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1429,6 +1437,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
+	kasan_poison_slab(page);
+
 	for_each_object_idx(p, idx, s, start, page->objects) {
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
@@ -2513,6 +2523,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2539,6 +2550,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2924,6 +2937,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3296,6 +3310,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3339,12 +3355,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3360,6 +3378,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 09/19] mm: slub: add kernel address sanitizer support for slub allocator
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Chernenkov, Dmitry Vyukov,
	Konstantin Serebryany, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Pekka Enberg, David Rientjes

With this patch kasan will be able to catch bugs in memory allocated
by slub.
Initially, all objects in a newly allocated slab page are marked as
redzone. Later, when a slub object is allocated, the number of bytes
requested by the caller is marked as accessible, and the rest of the
object (including slub's metadata) is marked as redzone (inaccessible).

We also mark an object as accessible if ksize() was called for it.
There are some places in the kernel where ksize() is called to find
out the size of the really allocated area. Such callers may validly
access the whole allocated memory, so it has to be marked as
accessible.

Code in slub.c and slab_common.c may validly access object metadata,
so instrumentation for these files is disabled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
---
 include/linux/kasan.h | 27 ++++++++++++++
 include/linux/slab.h  | 11 ++++--
 lib/Kconfig.kasan     |  1 +
 mm/Makefile           |  3 ++
 mm/kasan/kasan.c      | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  5 +++
 mm/kasan/report.c     | 21 +++++++++++
 mm/slab_common.c      |  5 ++-
 mm/slub.c             | 31 ++++++++++++++--
 9 files changed, 197 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f00c15c..d5310ee 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -37,6 +37,18 @@ void kasan_unpoison_shadow(const void *address, size_t size);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -47,6 +59,21 @@ static inline void kasan_disable_current(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+					void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+					void *object) {}
+
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index ed2ffaa..76f1fee 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
 				(unsigned long)ZERO_SIZE_PTR)
 
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 
 struct mem_cgroup;
 /*
@@ -325,7 +326,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 		gfp_t flags, size_t size)
 {
-	return kmem_cache_alloc(s, flags);
+	void *ret = kmem_cache_alloc(s, flags);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 
 static __always_inline void *
@@ -333,7 +337,10 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			      gfp_t gfpflags,
 			      int node, size_t size)
 {
-	return kmem_cache_alloc_node(s, gfpflags, node);
+	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
+	return ret;
 }
 #endif /* CONFIG_TRACING */
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 0052b1b..a11ac02 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,7 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "KASan: runtime memory debugger"
+	depends on SLUB_DEBUG
 	help
 	  Enables kernel address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/Makefile b/mm/Makefile
index 79f4fbc..3c1caa2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b516eb8..dc83f07 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -31,6 +31,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
@@ -268,6 +269,103 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
+void kasan_poison_slab(struct page *page)
+{
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+	kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+	unsigned long size = cache->object_size;
+	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+		return;
+
+	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(object == NULL))
+		return;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)object + cache->object_size,
+				KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(object, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+	struct page *page;
+	unsigned long redzone_start;
+	unsigned long redzone_end;
+
+	if (unlikely(ptr == NULL))
+		return;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	kasan_unpoison_shadow(ptr, size);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		kasan_kmalloc_large(object, size);
+	else
+		kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+	struct page *page = virt_to_page(ptr);
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			KASAN_FREE_PAGE);
+}
+
 #define DEFINE_ASAN_LOAD_STORE(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d3c90d5..5b052ab 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,6 +7,10 @@
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+
 
 struct kasan_access_info {
 	const void *access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index fab8e78..2760edb 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -24,6 +24,7 @@
 #include <linux/kasan.h>
 
 #include "kasan.h"
+#include "../slab.h"
 
 /* Shadow layout customization. */
 #define SHADOW_BYTES_PER_BLOCK 1
@@ -55,8 +56,11 @@ static void print_error_description(struct kasan_access_info *info)
 
 	switch (shadow_val) {
 	case KASAN_FREE_PAGE:
+	case KASAN_KMALLOC_FREE:
 		bug_type = "use after free";
 		break;
+	case KASAN_PAGE_REDZONE:
+	case KASAN_KMALLOC_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -77,6 +81,23 @@ static void print_address_description(struct kasan_access_info *info)
 	if ((addr >= (void *)PAGE_OFFSET) &&
 		(addr < high_memory)) {
 		struct page *page = virt_to_head_page(addr);
+
+		if (PageSlab(page)) {
+			void *object;
+			struct kmem_cache *cache = page->slab_cache;
+			void *last_object;
+
+			object = virt_to_obj(cache, page_address(page), addr);
+			last_object = page_address(page) +
+				page->objects * cache->size;
+
+			if (unlikely(object > last_object))
+				object = last_object; /* we hit into padding */
+
+			object_err(cache, page, object,
+				"kasan: bad access detected");
+			return;
+		}
 		dump_page(page, "kasan: bad access detected");
 	}
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0dd9eb4..820a273 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -887,6 +887,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_kmem_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
+	kasan_kmalloc_large(ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1066,8 +1067,10 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 	if (p)
 		ks = ksize(p);
 
-	if (ks >= new_size)
+	if (ks >= new_size) {
+		kasan_krealloc((void *)p, new_size);
 		return (void *)p;
+	}
 
 	ret = kmalloc_track_caller(new_size, flags);
 	if (ret && p)
diff --git a/mm/slub.c b/mm/slub.c
index 390972f..9185e1d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1251,11 +1251,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
+	kasan_kmalloc_large(ptr, size);
 }
 
 static inline void kfree_hook(const void *x)
 {
 	kmemleak_free(x);
+	kasan_kfree_large(x);
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -1278,6 +1280,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
 	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
 	memcg_kmem_put_cache(s);
+	kasan_slab_alloc(s, object);
 }
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -1301,6 +1304,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 #endif
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
+
+	kasan_slab_free(s, x);
 }
 
 /*
@@ -1395,8 +1400,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	if (unlikely(s->ctor))
+	if (unlikely(s->ctor)) {
+		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
+		kasan_poison_object_data(s, object);
+	}
 }
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1429,6 +1437,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
+	kasan_poison_slab(page);
+
 	for_each_object_idx(p, idx, s, start, page->objects) {
 		setup_object(s, page, p);
 		if (likely(idx < page->objects))
@@ -2513,6 +2523,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2539,6 +2550,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
+
+	kasan_kmalloc(s, ret, size);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2924,6 +2937,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
+	kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
@@ -3296,6 +3310,8 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc);
@@ -3339,12 +3355,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
+	kasan_kmalloc(s, ret, size);
+
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3360,6 +3378,15 @@ size_t ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
+
+size_t ksize(const void *object)
+{
+	size_t size = __ksize(object);
+	/* We assume that ksize callers could use the whole allocated area,
+	   so we need to unpoison this area. */
+	kasan_krealloc(object, size);
+	return size;
+}
 EXPORT_SYMBOL(ksize);
 
 void kfree(const void *x)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 10/19] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
  2015-02-03 17:42   ` Andrey Ryabinin
  (?)
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS (VFS...)

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested from kmalloc().

dentry_string_cmp() relies on the fact that the dname is allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses.
To avoid such reports we mark the rounded-up allocation size as
accessible in the shadow.
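
A short worked example of the unpoisoned length (values are
illustrative only):

	/*
	 * On a 64-bit kernel, a name with name->len == 5 needs 6 bytes
	 * including the terminating NUL, but dentry_string_cmp() may
	 * read a whole word, i.e. 8 bytes.  Unpoisoning
	 * round_up(5 + 1, sizeof(unsigned long)) == 8 bytes keeps kasan
	 * quiet about the two bytes read past the NUL, which is exactly
	 * the known-benign over-read described above.
	 */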

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 10/19] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS (VFS...)

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested from kmalloc().

dentry_string_cmp() relies on the fact that the dname is allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses.
To avoid such reports we mark the rounded-up allocation size as
accessible in the shadow.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 10/19] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Alexander Viro, open list:FILESYSTEMS VFS...

We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in dentry_string_cmp().
When CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a
few bytes beyond the size requested from kmalloc().

dentry_string_cmp() relies on the fact that the dname is allocated
with kmalloc() and that kmalloc() internally rounds up the allocation
size. So this is not a bug, but it makes kasan complain about such
accesses.
To avoid such reports we mark the rounded-up allocation size as
accessible in the shadow.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 fs/dcache.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/dcache.c b/fs/dcache.c
index 1467ab9..dc400fd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,8 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/list_lru.h>
+#include <linux/kasan.h>
+
 #include "internal.h"
 #include "mount.h"
 
@@ -1539,6 +1541,9 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 		}
 		atomic_set(&p->u.count, 1);
 		dname = p->name;
+		if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+			kasan_unpoison_shadow(dname,
+				round_up(name->len + 1,	sizeof(unsigned long)));
 	} else  {
 		dname = dentry->d_iname;
 	}	
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 11/19] kmemleak: disable kasan instrumentation for kmemleak
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan around
those accesses.
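
A worked example of the access that would otherwise be reported
(sizes are illustrative only):

	/*
	 * kmalloc(100) is served from the 128-byte kmalloc cache, and
	 * kasan leaves only the 100 requested bytes accessible, while
	 * kmemleak records the rounded-up 128 bytes as object->size.
	 * The checksum pass then does
	 *
	 *   crc32(0, (void *)object->pointer, object->size);
	 *
	 * which reads 28 bytes of redzone and would trigger an
	 * out-of-bounds report unless kasan is disabled around it.
	 */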

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..5405aff 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_current();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_current();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_current();
 		pointer = *ptr;
+		kasan_enable_current();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 11/19] kmemleak: disable kasan instrumentation for kmemleak
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Catalin Marinas

kmalloc() internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes kasan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable kasan around
those accesses.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 3cda50c..5405aff 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
 #include <asm/processor.h>
 #include <linux/atomic.h>
 
+#include <linux/kasan.h>
 #include <linux/kmemcheck.h>
 #include <linux/kmemleak.h>
 #include <linux/memory_hotplug.h>
@@ -1113,7 +1114,10 @@ static bool update_checksum(struct kmemleak_object *object)
 	if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
 		return false;
 
+	kasan_disable_current();
 	object->checksum = crc32(0, (void *)object->pointer, object->size);
+	kasan_enable_current();
+
 	return object->checksum != old_csum;
 }
 
@@ -1164,7 +1168,9 @@ static void scan_block(void *_start, void *_end,
 						  BYTES_PER_POINTER))
 			continue;
 
+		kasan_disable_current();
 		pointer = *ptr;
+		kasan_enable_current();
 
 		object = find_and_get_object(pointer, 1);
 		if (!object)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 12/19] lib: add kasan test module
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more stuff here in the future (like
out-of-bounds accesses to stack/global variables and so on).
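
Typical usage, assuming the module is built as test_kasan.ko with
CONFIG_TEST_KASAN=m (a sketch, not part of this patch):

	# insmod test_kasan.ko
	# dmesg

Every test case first prints a "kasan test:" banner (see pr_fmt below)
which should be followed by the corresponding kasan report in the
kernel log. Since kmalloc_tests_init() returns -EAGAIN, insmod reports
an error once the tests have run and the module is not left loaded.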

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index a11ac02..4d47d87 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -42,4 +42,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index b1dbda7..5b11c8f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -37,6 +37,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..098c08e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+	volatile int i = 3;
+	char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+	pr_info("out-of-bounds global variable\n");
+	*(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+	char stack_array[10];
+	volatile int i = 0;
+	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+	pr_info("out-of-bounds on stack\n");
+	*(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	kasan_stack_oob();
+	kasan_global_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 12/19] lib: add kasan test module
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

This is a test module that does various nasty things such as
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.

It mostly concentrates on testing the slab allocator, but we
might want to add more tests here in the future (such as
out-of-bounds accesses to stack and global variables, and so on).
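
For illustration, a new test added to this module would follow the same
pattern as the existing ones. The function below is only a hypothetical
sketch of such an addition (it is not part of this patch); it would live
in lib/test_kasan.c and be called from kmalloc_tests_init():

	/* Hypothetical extra test: out-of-bounds access in a memcpy(). */
	static noinline void __init kmalloc_oob_memcpy(void)
	{
		char *src, *dst;
		size_t size = 64;

		pr_info("out-of-bounds in memcpy\n");
		src = kmalloc(size, GFP_KERNEL);
		dst = kmalloc(size, GFP_KERNEL);
		if (!src || !dst) {
			pr_err("Allocation failed\n");
			kfree(src);
			kfree(dst);
			return;
		}

		/* Reads and writes 8 bytes past the end of both buffers. */
		memcpy(dst, src, size + 8);
		kfree(src);
		kfree(dst);
	}

Building with CONFIG_TEST_KASAN=m and loading the module runs every test
from the module's init function, which intentionally returns -EAGAIN so
the module is never left loaded.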

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 lib/Kconfig.kasan |   8 ++
 lib/Makefile      |   1 +
 lib/test_kasan.c  | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 lib/test_kasan.c

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index a11ac02..4d47d87 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -42,4 +42,12 @@ config KASAN_INLINE
 
 endchoice
 
+config TEST_KASAN
+	tristate "Module for testing kasan for bug detection"
+	depends on m && KASAN
+	help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
 endif
diff --git a/lib/Makefile b/lib/Makefile
index b1dbda7..5b11c8f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -37,6 +37,7 @@ obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..098c08e
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+	char *ptr;
+	size_t size = 123;
+
+	pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 'x';
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+	char *ptr;
+	size_t size = 15;
+
+	pr_info("out-of-bounds to left\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	*ptr = *(ptr - 1);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+	char *ptr;
+	size_t size = 4096;
+
+	pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_rigth(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+	pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 19;
+
+	pr_info("out-of-bounds after krealloc more\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+
+	ptr2[size2] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+	char *ptr1, *ptr2;
+	size_t size1 = 17;
+	size_t size2 = 15;
+
+	pr_info("out-of-bounds after krealloc less\n");
+	ptr1 = kmalloc(size1, GFP_KERNEL);
+	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		return;
+	}
+	ptr2[size1] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+	struct {
+		u64 words[2];
+	} *ptr1, *ptr2;
+
+	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+	if (!ptr1 || !ptr2) {
+		pr_err("Allocation failed\n");
+		kfree(ptr1);
+		kfree(ptr2);
+		return;
+	}
+	*ptr1 = *ptr2;
+	kfree(ptr1);
+	kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+	char *ptr;
+	size_t size = 666;
+
+	pr_info("out-of-bounds in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	memset(ptr, 0, size+5);
+	kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+	char *ptr;
+	size_t size = 10;
+
+	pr_info("use-after-free\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	*(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+	char *ptr;
+	size_t size = 33;
+
+	pr_info("use-after-free in memset\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr);
+	memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+	char *ptr1, *ptr2;
+	size_t size = 43;
+
+	pr_info("use-after-free after another kmalloc\n");
+	ptr1 = kmalloc(size, GFP_KERNEL);
+	if (!ptr1) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	kfree(ptr1);
+	ptr2 = kmalloc(size, GFP_KERNEL);
+	if (!ptr2) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr1[40] = 'x';
+	kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache = kmem_cache_create("test_cache",
+						size, 0,
+						0, NULL);
+	if (!cache) {
+		pr_err("Cache allocation failed\n");
+		return;
+	}
+	pr_info("out-of-bounds in kmem_cache_alloc\n");
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		pr_err("Allocation failed\n");
+		kmem_cache_destroy(cache);
+		return;
+	}
+
+	*p = p[size];
+	kmem_cache_free(cache, p);
+	kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+	volatile int i = 3;
+	char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+	pr_info("out-of-bounds global variable\n");
+	*(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+	char stack_array[10];
+	volatile int i = 0;
+	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+	pr_info("out-of-bounds on stack\n");
+	*(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+	kmalloc_oob_left();
+	kmalloc_node_oob_right();
+	kmalloc_large_oob_rigth();
+	kmalloc_oob_krealloc_more();
+	kmalloc_oob_krealloc_less();
+	kmalloc_oob_16();
+	kmalloc_oob_in_memset();
+	kmalloc_uaf();
+	kmalloc_uaf_memset();
+	kmalloc_uaf2();
+	kmem_cache_oob();
+	kasan_stack_oob();
+	kasan_global_oob();
+	return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 13/19] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Matt Fleming, H. Peter Anvin, Thomas Gleixner,
	Ingo Molnar, open list:EXTENSIBLE FIRMWA...

Instrumentation of calls to builtin functions was recently removed
from GCC 5.0. To check the memory accessed by such functions,
userspace ASan always uses interceptors for them.

So now we should do the same. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions, which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__'
prefix. For files built without kasan instrumentation (e.g. mm/slub.c),
the original mem* functions are replaced (via #define) with the prefixed
variants, because we don't want to check memory accesses there.
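
To illustrate what the interceptors buy us, consider a made-up fragment
in some KASAN-instrumented file (the function name and sizes below are
hypothetical, chosen only for this description):

	/* In a file built with KASAN instrumentation;
	 * needs <linux/slab.h> and <linux/string.h>. */
	static void oob_memcpy_example(const void *src, size_t len)
	{
		char *p = kmalloc(16, GFP_KERNEL);

		if (!p)
			return;
		/*
		 * Since GCC 5 the compiler no longer emits inline checks for
		 * this builtin call.  With this patch the call resolves to
		 * the memcpy() interceptor in mm/kasan/kasan.c rather than
		 * the plain __memcpy(), so for len > 16 the out-of-bounds
		 * write is caught by __asan_storeN() and reported before
		 * __memcpy() carries out the copy.
		 */
		memcpy(p, src, len);
		kfree(p);
	}

Files on the other side of the split, such as mm/slub.c, keep calling the
unchecked __memcpy()/__memset()/__memmove() through the #defines added to
<asm/string_64.h> below.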

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 29 +++++++++++++++++++++++++++++
 9 files changed, 74 insertions(+), 11 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use the not-instrumented versions of the mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 2be1098..47437b1 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -5,6 +5,10 @@
 /* error code which can't be mistaken for valid address */
 #define EFI_ERROR	(~0UL)
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 void efi_char16_printk(efi_system_table_t *, efi_char16_t *);
 
 efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index dc83f07..799c52b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -255,6 +255,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write, _RET_IP_);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 13/19] x86_64: kasan: add interceptors for memset/memmove/memcpy functions
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Matt Fleming, H. Peter Anvin, Thomas Gleixner,
	Ingo Molnar, open list:EXTENSIBLE FIRMWA...

Instrumentation of calls to builtin functions was recently removed
from GCC 5.0. To check the memory accessed by such functions,
userspace ASan always uses interceptors for them.

So now we should do the same. This patch declares memset/memmove/memcpy
as weak symbols. In mm/kasan/kasan.c we have our own implementations
of those functions, which check memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__'
prefix. For files built without kasan instrumentation (e.g. mm/slub.c),
the original mem* functions are replaced (via #define) with the prefixed
variants, because we don't want to check memory accesses there.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/boot/compressed/eboot.c       |  3 +--
 arch/x86/boot/compressed/misc.h        |  1 +
 arch/x86/include/asm/string_64.h       | 18 +++++++++++++++++-
 arch/x86/kernel/x8664_ksyms_64.c       | 10 ++++++++--
 arch/x86/lib/memcpy_64.S               |  6 ++++--
 arch/x86/lib/memmove_64.S              |  4 ++++
 arch/x86/lib/memset_64.S               | 10 ++++++----
 drivers/firmware/efi/libstub/efistub.h |  4 ++++
 mm/kasan/kasan.c                       | 29 +++++++++++++++++++++++++++++
 9 files changed, 74 insertions(+), 11 deletions(-)

diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 92b9a5f..ef17683 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -13,8 +13,7 @@
 #include <asm/setup.h>
 #include <asm/desc.h>
 
-#undef memcpy			/* Use memcpy from misc.c */
-
+#include "../string.h"
 #include "eboot.h"
 
 static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..04477d6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
  * we just keep it from happening
  */
 #undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
 #ifdef CONFIG_X86_32
 #define _ASM_X86_DESC_H 1
 #endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
 #ifndef CONFIG_KMEMCHECK
 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
 extern void *memcpy(void *to, const void *from, size_t len);
 #else
-extern void *__memcpy(void *to, const void *from, size_t len);
 #define memcpy(dst, src, len)					\
 ({								\
 	size_t __len = (len);					\
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
 void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
 size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use the not-instrumented versions of the mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 0406819..37d8fa4 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
 #undef memset
 #undef memmove
 
+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
 extern void *memset(void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);
 
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(memmove);
 
 #ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
 .Lmemcpy_e_e:
 	.previous
 
+.weak memcpy
+
 ENTRY(__memcpy)
 ENTRY(memcpy)
 	CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
 	 * only outcome...
 	 */
 	.section .altinstructions, "a"
-	altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+	altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
 			     .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
-	altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+	altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
 			     .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
 	.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
  * Output:
  * rax: dest
  */
+.weak memmove
+
 ENTRY(memmove)
+ENTRY(__memmove)
 	CFI_STARTPROC
 
 	/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
 		.Lmemmove_end_forward-.Lmemmove_begin_forward,	\
 		.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
 	.previous
+ENDPROC(__memmove)
 ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
 .Lmemset_e_e:
 	.previous
 
+.weak memset
+
 ENTRY(memset)
 ENTRY(__memset)
 	CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
          * feature to implement the right patch order.
 	 */
 	.section .altinstructions,"a"
-	altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
-			     .Lfinal-memset,.Lmemset_e-.Lmemset_c
-	altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
-			     .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+	altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+			     .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+	altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+			     .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
 	.previous
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 2be1098..47437b1 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -5,6 +5,10 @@
 /* error code which can't be mistaken for valid address */
 #define EFI_ERROR	(~0UL)
 
+#undef memcpy
+#undef memset
+#undef memmove
+
 void efi_char16_printk(efi_system_table_t *, efi_char16_t *);
 
 efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index dc83f07..799c52b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -255,6 +255,35 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write, _RET_IP_);
 }
 
+void __asan_loadN(unsigned long addr, size_t size);
+void __asan_storeN(unsigned long addr, size_t size);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	__asan_storeN((unsigned long)addr, len);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	__asan_loadN((unsigned long)src, len);
+	__asan_storeN((unsigned long)dest, len);
+
+	return __memcpy(dest, src, len);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 14/19] kasan: enable stack instrumentation
  2015-02-03 17:42   ` Andrey Ryabinin
  (?)
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function's prologue.

This approach significantly increases stack usage,
so the size of every in-kernel stack was doubled.
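
Concretely, with 4K pages on x86_64 the doubling works out as follows
(recomputing the defines from the diff below; the 'was' values are the
sizes before this patch):

	#define KASAN_STACK_ORDER	1
	#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)			/* 3 */
	#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)	/* 32K, was 16K */
	#define EXCEPTION_STACK_ORDER	(0 + KASAN_STACK_ORDER)			/* 1 */
	#define EXCEPTION_STKSZ		(PAGE_SIZE << EXCEPTION_STACK_ORDER)	/* 8K,  was 4K  */
	#define IRQ_STACK_ORDER		(2 + KASAN_STACK_ORDER)			/* 3 */
	#define IRQ_STACK_SIZE		(PAGE_SIZE << IRQ_STACK_ORDER)		/* 32K, was 16K */

DEBUG_STACK_ORDER is defined relative to EXCEPTION_STACK_ORDER, so the
debug stack grows from 8K to 16K as well.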

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          | 11 +++++++++--
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 3e4d9a1..5350870 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,11 +189,18 @@ void __init kasan_init(void)
 		if (map_range(&pfn_mapped[i]))
 			panic("kasan: unable to allocate shadow!");
 	}
-
 	populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
-				(void *)KASAN_SHADOW_END);
+			kasan_mem_to_shadow((void *)__START_KERNEL_map));
+
+	vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
+			(unsigned long)kasan_mem_to_shadow(_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_VADDR),
+			(void *)KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5b052ab..1fcc1d8 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct kasan_access_info {
 	const void *access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2760edb..866732e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct kasan_access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: KASan: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 14/19] kasan: enable stack instrumentation
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function's prologue.

This approach significantly increases stack usage,
so the size of every in-kernel stack was doubled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          | 11 +++++++++--
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 3e4d9a1..5350870 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,11 +189,18 @@ void __init kasan_init(void)
 		if (map_range(&pfn_mapped[i]))
 			panic("kasan: unable to allocate shadow!");
 	}
-
 	populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
-				(void *)KASAN_SHADOW_END);
+			kasan_mem_to_shadow((void *)__START_KERNEL_map));
+
+	vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
+			(unsigned long)kasan_mem_to_shadow(_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_VADDR),
+			(void *)KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5b052ab..1fcc1d8 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct kasan_access_info {
 	const void *access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2760edb..866732e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct kasan_access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: KASan: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 14/19] kasan: enable stack instrumentation
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Stack instrumentation allows detection of out-of-bounds
memory accesses to variables allocated on the stack.
The compiler adds redzones around every stack variable
and poisons those redzones in the function's prologue.

This approach significantly increases stack usage,
so the size of every in-kernel stack was doubled.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/include/asm/page_64_types.h | 12 +++++++++---
 arch/x86/kernel/Makefile             |  2 ++
 arch/x86/mm/kasan_init_64.c          | 11 +++++++++--
 include/linux/init_task.h            |  8 ++++++++
 mm/kasan/kasan.h                     |  9 +++++++++
 mm/kasan/report.c                    |  6 ++++++
 scripts/Makefile.kasan               |  1 +
 7 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 75450b2..4edd53b 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,17 +1,23 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
-#define THREAD_SIZE_ORDER	2
+#ifdef CONFIG_KASAN
+#define KASAN_STACK_ORDER 1
+#else
+#define KASAN_STACK_ORDER 0
+#endif
+
+#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
 #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define CURRENT_MASK (~(THREAD_SIZE - 1))
 
-#define EXCEPTION_STACK_ORDER 0
+#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
 
 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1)
 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER)
 
-#define IRQ_STACK_ORDER 2
+#define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
 
 #define DOUBLEFAULT_STACK 1
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4fc8ca7..057f6f6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,8 @@ CFLAGS_REMOVE_early_printk.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n
 
 CFLAGS_irq.o := -I$(src)/../include/asm/trace
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 3e4d9a1..5350870 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -189,11 +189,18 @@ void __init kasan_init(void)
 		if (map_range(&pfn_mapped[i]))
 			panic("kasan: unable to allocate shadow!");
 	}
-
 	populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
-				(void *)KASAN_SHADOW_END);
+			kasan_mem_to_shadow((void *)__START_KERNEL_map));
+
+	vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
+			(unsigned long)kasan_mem_to_shadow(_end),
+			NUMA_NO_NODE);
+
+	populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_VADDR),
+			(void *)KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	init_task.kasan_depth = 0;
 }
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d3d43ec..696d223 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -175,6 +175,13 @@ extern struct task_group root_task_group;
 # define INIT_NUMA_BALANCING(tsk)
 #endif
 
+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk)						\
+	.kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,6 +257,7 @@ extern struct task_group root_task_group;
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
 	INIT_NUMA_BALANCING(tsk)					\
+	INIT_KASAN(tsk)							\
 }
 
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5b052ab..1fcc1d8 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,15 @@
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
 
+/*
+ * Stack redzone shadow values
+ * (Those are compiler's ABI, don't change them)
+ */
+#define KASAN_STACK_LEFT        0xF1
+#define KASAN_STACK_MID         0xF2
+#define KASAN_STACK_RIGHT       0xF3
+#define KASAN_STACK_PARTIAL     0xF4
+
 
 struct kasan_access_info {
 	const void *access_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2760edb..866732e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,6 +64,12 @@ static void print_error_description(struct kasan_access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_STACK_LEFT:
+	case KASAN_STACK_MID:
+	case KASAN_STACK_RIGHT:
+	case KASAN_STACK_PARTIAL:
+		bug_type = "out of bounds on stack";
+		break;
 	}
 
 	pr_err("BUG: KASan: %s in %pS at addr %p\n",
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 159396a..0ac7d1d 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -9,6 +9,7 @@ CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+		--param asan-stack=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 15/19] mm: vmalloc: add flag preventing guard hole allocation
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables KASan will need shadow memory
backing the memory of modules. So on module load we will need to
allocate shadow memory and map it at the shadow address that
corresponds to the address returned by module_alloc().

__vmalloc_node_range() could be used for this purpose, except that
it puts a guard hole after the allocated area. A guard hole in
shadow memory would be a problem, because at some future point we
might need shadow memory at the address occupied by that guard
hole, and we would then fail to allocate shadow for module_alloc().

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm area
doesn't have a guard hole.
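
A minimal, hypothetical illustration of the flag's effect, using the
helpers touched by this patch:

	struct vm_struct *area = get_vm_area(PAGE_SIZE, VM_NO_GUARD);

	if (area) {
		/*
		 * Without VM_NO_GUARD, __get_vm_area_node() would have
		 * reserved an extra guard page of address space and
		 * get_vm_area_size() would subtract it again.  With the
		 * flag set, no guard page is reserved, so the usable size
		 * equals area->size.
		 */
		WARN_ON(get_vm_area_size(area) != PAGE_SIZE);
		free_vm_area(area);
	}

The actual KASan user arrives once __vmalloc_node_range() learns to take
vm area flags in the next patch.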

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 15/19] mm: vmalloc: add flag preventing guard hole allocation
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

For instrumenting global variables KASan will need shadow memory
backing the memory of modules. So on module load we will need to
allocate shadow memory and map it at the shadow address that
corresponds to the address returned by module_alloc().

__vmalloc_node_range() could be used for this purpose, except that
it puts a guard hole after the allocated area. A guard hole in
shadow memory would be a problem, because at some future point we
might need shadow memory at the address occupied by that guard
hole, and we would then fail to allocate shadow for module_alloc().

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm area
doesn't have a guard hole.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b87696f..1526fe7 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39c3388..2e74e99 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 16/19] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
  2015-02-03 17:42   ` Andrey Ryabinin
                       ` (3 preceding siblings ...)
  (?)
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller

For instrumenting global variables KASan will need shadow memory
backing the memory of modules. So on module load we will need to
allocate shadow memory and map it at the shadow address that
corresponds to the address returned by module_alloc().

__vmalloc_node_range() could be used for this purpose, except that
it puts a guard hole after the allocated area. A guard hole in
shadow memory would be a problem, because at some future point we
might need shadow memory at the address occupied by that guard
hole, and we would then fail to allocate shadow for module_alloc().

Now that we have the VM_NO_GUARD flag for disabling the guard page,
we need to be able to pass it into __vmalloc_node_range(). Add a new
parameter, 'vm_flags', to the __vmalloc_node_range() function.
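
The intended user is KASan's shadow allocation for module areas. A rough,
hypothetical sketch of such a caller (the helper name is made up;
kasan_mem_to_shadow() and KASAN_SHADOW_SCALE_SHIFT are provided by other
patches in this series):

	static int kasan_module_shadow_alloc(void *mod_addr, size_t mod_size)
	{
		unsigned long shadow_start =
			(unsigned long)kasan_mem_to_shadow(mod_addr);
		size_t shadow_size = round_up(mod_size >> KASAN_SHADOW_SCALE_SHIFT,
					      PAGE_SIZE);
		void *ret;

		if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
			return -EINVAL;

		/*
		 * Map the shadow at its fixed address with no guard hole
		 * after it, so the shadow of the next module can be placed
		 * immediately behind this one.
		 */
		ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
					   shadow_start + shadow_size,
					   GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL,
					   VM_NO_GUARD, NUMA_NO_NODE,
					   __builtin_return_address(0));
		return ret ? 0 : -ENOMEM;
	}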

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2

^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 16/19] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range()
@ 2015-02-03 17:43     ` Andrey Ryabinin
  0 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Russell King, Catalin Marinas, Will Deacon,
	Ralf Baechle, James E.J. Bottomley, Helge Deller,
	Martin Schwidefsky, Heiko Carstens, supporter:S390,
	David S. Miller, Guan Xuetao, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, moderated list:ARM PORT, open list:MIPS,
	open list:PARISC ARCHITECTURE, open list:S390,
	open list:SPARC + UltraSPAR...

For instrumenting global variables, KASan needs shadow memory backing
the memory of modules. So on module load we will need to allocate
shadow memory and map it at the shadow address that corresponds to the
address returned by module_alloc().

__vmalloc_node_range() could be used for this purpose, except that it
puts a guard hole after the allocated area. A guard hole in the shadow
region would be a problem, because at some future point we might need
shadow memory at the address occupied by that guard hole; in that case
we would fail to allocate shadow for module_alloc().

Now that we have the VM_NO_GUARD flag to disable the guard page, we
need a way to pass it into __vmalloc_node_range(). Add a new parameter
'vm_flags' to __vmalloc_node_range().
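
As a rough illustration (the helper name and the exact rounding below
are invented for this example and are not the kasan_module_alloc()
added later in the series), shadow for one module area could then be
mapped with the new parameter like this:

     /*
      * Sketch: allocate and map shadow backing one module_alloc() area.
      * VM_NO_GUARD avoids a guard hole after the shadow area, so the
      * shadow of a neighbouring allocation can sit right next to it.
      */
     static void *map_module_shadow(const void *addr, size_t size)
     {
             unsigned long shadow_start =
                             (unsigned long)kasan_mem_to_shadow(addr);
             size_t shadow_size = PAGE_ALIGN(size >> KASAN_SHADOW_SCALE_SHIFT);

             return __vmalloc_node_range(shadow_size, 1, shadow_start,
                                         shadow_start + shadow_size,
                                         GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL,
                                         VM_NO_GUARD, NUMA_NO_NODE,
                                         __builtin_return_address(0));
     }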

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/arm/kernel/module.c       |  2 +-
 arch/arm64/kernel/module.c     |  4 ++--
 arch/mips/kernel/module.c      |  2 +-
 arch/parisc/kernel/module.c    |  2 +-
 arch/s390/kernel/module.c      |  2 +-
 arch/sparc/kernel/module.c     |  2 +-
 arch/unicore32/kernel/module.c |  2 +-
 arch/x86/kernel/module.c       |  2 +-
 include/linux/vmalloc.h        |  4 +++-
 mm/vmalloc.c                   | 10 ++++++----
 10 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index bea7db9..2e11961 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -41,7 +41,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 9b6f71d..67bf410 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -35,8 +35,8 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+				    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 2a52568..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -47,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 5822e8e..3c63a82 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
 	 * init_data correctly */
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
 				    GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_RWX, NUMA_NO_NODE,
+				    PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 409d152..36154a2 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				    GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 97655e0..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 #else
diff --git a/arch/unicore32/kernel/module.c b/arch/unicore32/kernel/module.c
index dc41f6d..e191b34 100644
--- a/arch/unicore32/kernel/module.c
+++ b/arch/unicore32/kernel/module.c
@@ -25,7 +25,7 @@
 void *module_alloc(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
-				GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..e830e61 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -88,7 +88,7 @@ void *module_alloc(unsigned long size)
 	return __vmalloc_node_range(size, 1,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
-				    PAGE_KERNEL_EXEC, NUMA_NO_NODE,
+				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1526fe7..7d7acb3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,7 +76,9 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller);
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller);
+
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2e74e99..35b25e1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1619,6 +1619,7 @@ fail:
  *	@end:		vm area range end
  *	@gfp_mask:	flags for the page level allocator
  *	@prot:		protection mask for the allocated pages
+ *	@vm_flags:	additional vm area flags (e.g. %VM_NO_GUARD)
  *	@node:		node to use for allocation or NUMA_NO_NODE
  *	@caller:	caller's return address
  *
@@ -1628,7 +1629,8 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, const void *caller)
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1638,8 +1640,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED,
-				  start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
@@ -1688,7 +1690,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-				gfp_mask, prot, node, caller);
+				gfp_mask, prot, 0, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 17/19] kernel: add support for .init_array.* constructors
  2015-02-03 17:42   ` Andrey Ryabinin
  (?)
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Arnd Bergmann, open list:GENERIC INCLUDE/A...

KASan uses constructors to initialize redzones for global variables.
Globals instrumentation in GCC 4.9.2 produces constructors with a
priority (placed in .init_array.00099).

Currently the kernel ignores such constructors; only constructors with
the default priority (.init_array) are supported.

This patch adds support for constructors with priorities. For the
kernel image we place pointers to these constructors between
__ctors_start/__ctors_end, and do_ctors() will call them on startup.
For modules we merge the .init_array.* sections into the resulting
.init_array; module code already handles constructors in the
.init_array section properly.
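
On the kernel image side nothing else is needed, because do_ctors() in
init/main.c just walks the pointer array collected between
__ctors_start and __ctors_end; roughly (simplified here for
illustration):

     typedef void (*ctor_fn_t)(void);

     static void __init do_ctors(void)
     {
     #ifdef CONFIG_CONSTRUCTORS
             ctor_fn_t *fn = (ctor_fn_t *) __ctors_start;

             for (; fn < (ctor_fn_t *) __ctors_end; fn++)
                     (*fn)();
     #endif
     }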

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/asm-generic/vmlinux.lds.h | 1 +
 scripts/module-common.lds         | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bee5d68..ac78910 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -478,6 +478,7 @@
 #define KERNEL_CTORS()	. = ALIGN(8);			   \
 			VMLINUX_SYMBOL(__ctors_start) = .; \
 			*(.ctors)			   \
+			*(SORT(.init_array.*))		   \
 			*(.init_array)			   \
 			VMLINUX_SYMBOL(__ctors_end) = .;
 #else
diff --git a/scripts/module-common.lds b/scripts/module-common.lds
index 0865b3e..01c5849 100644
--- a/scripts/module-common.lds
+++ b/scripts/module-common.lds
@@ -16,4 +16,7 @@ SECTIONS {
 	__kcrctab_unused	: { *(SORT(___kcrctab_unused+*)) }
 	__kcrctab_unused_gpl	: { *(SORT(___kcrctab_unused_gpl+*)) }
 	__kcrctab_gpl_future	: { *(SORT(___kcrctab_gpl_future+*)) }
+
+	. = ALIGN(8);
+	.init_array		: { *(SORT(.init_array.*)) *(.init_array) }
 }
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-03 17:42   ` Andrey Ryabinin
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Rusty Russell

The MODULE_DEVICE_TABLE() macro is used to create aliases to device
tables. Normally an alias should have the same type as the aliased
symbol.

Device tables are arrays, so they have types of the form
'struct type##_device_id[x]'. The alias created by
MODULE_DEVICE_TABLE() has the non-array type
'struct type##_device_id'.

This inconsistency confuses the compiler: it can make a wrong
assumption about the variable's size, which leads KASan to produce a
false positive report about an out-of-bounds access.

For every global variable the compiler emits a call to
__asan_register_globals(), passing information about the variable
(address, size, size with redzone, name, ...).
__asan_register_globals() poisons the symbol's redzone to detect
possible out-of-bounds accesses.

When a symbol has an alias, __asan_register_globals() is called both
for the symbol and for the alias. The compiler determines the size of
a variable from the size of its type. The alias and the symbol share
the same address, so if the alias has the wrong size, part of the
memory that actually belongs to the symbol can be poisoned as the
alias's redzone.

By fixing the type of the alias symbol we fix its size, so
__asan_register_globals() will no longer poison valid memory.
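
A concrete (hypothetical) example of what changes, for a driver table
declared as:

     static const struct pci_device_id my_ids[] = {
             { PCI_DEVICE(0x1234, 0x5678) }, /* made-up IDs, illustration only */
             { /* sentinel */ }
     };
     MODULE_DEVICE_TABLE(pci, my_ids);

     /*
      * Old expansion - alias typed as a single element, so its
      * registered size is wrong:
      *   extern const struct pci_device_id
      *           __mod_pci__my_ids_device_table
      *           __attribute__ ((unused, alias("my_ids")));
      *
      * New expansion - typeof(my_ids) keeps the array type, so the
      * alias has the same size as the table itself:
      *   extern const typeof(my_ids)
      *           __mod_pci__my_ids_device_table
      *           __attribute__ ((unused, alias("my_ids")));
      */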

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/module.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index b653d7c..42999fe 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
 #ifdef MODULE
 /* Creates an alias so file2alias.c can find device table. */
 #define MODULE_DEVICE_TABLE(type, name)					\
-  extern const struct type##_device_id __mod_##type##__##name##_device_table \
+extern const typeof(name) __mod_##type##__##name##_device_table		\
   __attribute__ ((unused, alias(__stringify(name))))
 #else  /* !MODULE */
 #define MODULE_DEVICE_TABLE(type, name)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-03 17:42   ` Andrey Ryabinin
  (?)
@ 2015-02-03 17:43     ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-03 17:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Rusty Russell, Michal Marek, open list:KERNEL BUILD + fi...

This feature lets us detect out-of-bounds accesses to global
variables. It works both for globals in the kernel image and for
globals in modules. Currently it does not work for symbols placed in
user-specified sections (e.g. __init, __read_mostly, ...).

The idea is simple: the compiler grows each global variable by a
redzone and adds constructors that invoke the __asan_register_globals()
function. Information about each global (address, size, size with
redzone, ...) is passed to __asan_register_globals() so that we can
poison the variable's redzone.
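
Roughly, each descriptor handed to __asan_register_globals() looks
like the following (the authoritative layout is the one added to
mm/kasan/kasan.h by this patch and depends on KASAN_ABI_VERSION; this
is only a simplified reading of it):

     struct kasan_global {
             const void *beg;                /* start address of the global */
             size_t size;                    /* size of the global itself */
             size_t size_with_redzone;       /* size including trailing redzone */
             const void *name;
             const void *module_name;        /* translation unit of the global */
             unsigned long has_dynamic_init; /* needed for C++ dynamic init */
             /* KASAN_ABI_VERSION >= 4 appends source location info here */
     };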

This patch also forces module_alloc() to return an 8*PAGE_SIZE aligned
address, which makes shadow memory handling
(kasan_module_alloc()/kasan_module_free()) simpler. Such alignment
guarantees that each shadow page backing the modules' address space
corresponds to only one module_alloc() allocation.
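
A minimal sketch of that alignment argument (the helper name is
invented for illustration; KASAN_SHADOW_SCALE_SHIFT == 3 as elsewhere
in this series):

     /* Shadow bytes backing one module_alloc() area of 'size' bytes. */
     static inline size_t module_shadow_size(size_t size)
     {
             /* one shadow byte covers 8 bytes of module address space */
             return round_up(size, MODULE_ALIGN) >> KASAN_SHADOW_SCALE_SHIFT;
     }

     /*
      * With the area aligned to MODULE_ALIGN == PAGE_SIZE << 3, its
      * shadow starts on a page boundary, and module_shadow_size() is a
      * multiple of PAGE_SIZE (8 * PAGE_SIZE >> 3 == PAGE_SIZE).  So no
      * shadow page is shared by two module_alloc() allocations, and
      * kasan_module_free() can drop whole shadow pages.
      */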

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 arch/x86/kernel/module.c      | 12 +++++++++-
 arch/x86/mm/kasan_init_64.c   |  2 +-
 include/linux/compiler-gcc4.h |  4 ++++
 include/linux/compiler-gcc5.h |  2 ++
 include/linux/kasan.h         | 10 +++++++++
 kernel/module.c               |  2 ++
 lib/Kconfig.kasan             |  1 +
 mm/kasan/kasan.c              | 52 +++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h              | 25 +++++++++++++++++++++
 mm/kasan/report.c             | 22 ++++++++++++++++++
 scripts/Makefile.kasan        |  5 +++--
 11 files changed, 133 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e830e61..d1ac80b 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -24,6 +24,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/kasan.h>
 #include <linux/bug.h>
 #include <linux/mm.h>
 #include <linux/gfp.h>
@@ -83,13 +84,22 @@ static unsigned long int get_module_load_offset(void)
 
 void *module_alloc(unsigned long size)
 {
+	void *p;
+
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
-	return __vmalloc_node_range(size, 1,
+
+	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL | __GFP_HIGHMEM,
 				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
+	if (p && (kasan_module_alloc(p, size) < 0)) {
+		vfree(p);
+		return NULL;
+	}
+
+	return p;
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 5350870..4860906 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -196,7 +196,7 @@ void __init kasan_init(void)
 			(unsigned long)kasan_mem_to_shadow(_end),
 			NUMA_NO_NODE);
 
-	populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_VADDR),
+	populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
 			(void *)KASAN_SHADOW_END);
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
diff --git a/include/linux/compiler-gcc4.h b/include/linux/compiler-gcc4.h
index d1a5582..769e198 100644
--- a/include/linux/compiler-gcc4.h
+++ b/include/linux/compiler-gcc4.h
@@ -85,3 +85,7 @@
 #define __HAVE_BUILTIN_BSWAP16__
 #endif
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#if GCC_VERSION >= 40902
+#define KASAN_ABI_VERSION 3
+#endif
diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
index c8c5659..efee493 100644
--- a/include/linux/compiler-gcc5.h
+++ b/include/linux/compiler-gcc5.h
@@ -63,3 +63,5 @@
 #define __HAVE_BUILTIN_BSWAP64__
 #define __HAVE_BUILTIN_BSWAP16__
 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
+#define KASAN_ABI_VERSION 4
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d5310ee..72ba725 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -49,8 +49,15 @@ void kasan_krealloc(const void *object, size_t new_size);
 void kasan_slab_alloc(struct kmem_cache *s, void *object);
 void kasan_slab_free(struct kmem_cache *s, void *object);
 
+#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+
+int kasan_module_alloc(void *addr, size_t size);
+void kasan_module_free(void *addr);
+
 #else /* CONFIG_KASAN */
 
+#define MODULE_ALIGN 1
+
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 
 static inline void kasan_enable_current(void) {}
@@ -74,6 +81,9 @@ static inline void kasan_krealloc(const void *object, size_t new_size) {}
 static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
 
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_module_free(void *addr) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/kernel/module.c b/kernel/module.c
index d856e96..f842027 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -56,6 +56,7 @@
 #include <linux/async.h>
 #include <linux/percpu.h>
 #include <linux/kmemleak.h>
+#include <linux/kasan.h>
 #include <linux/jump_label.h>
 #include <linux/pfn.h>
 #include <linux/bsearch.h>
@@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
 void __weak module_memfree(void *module_region)
 {
 	vfree(module_region);
+	kasan_module_free(module_region);
 }
 
 void __weak module_arch_cleanup(struct module *mod)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 4d47d87..4fecaedc 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
 config KASAN
 	bool "KASan: runtime memory debugger"
 	depends on SLUB_DEBUG
+	select CONSTRUCTORS
 	help
 	  Enables kernel address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 799c52b..78fee63 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -22,6 +22,7 @@
 #include <linux/memblock.h>
 #include <linux/memory.h>
 #include <linux/mm.h>
+#include <linux/module.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -395,6 +396,57 @@ void kasan_kfree_large(const void *ptr)
 			KASAN_FREE_PAGE);
 }
 
+int kasan_module_alloc(void *addr, size_t size)
+{
+	void *ret;
+	size_t shadow_size;
+	unsigned long shadow_start;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
+	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+			PAGE_SIZE);
+
+	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
+		return -EINVAL;
+
+	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
+			shadow_start + shadow_size,
+			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
+			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
+			__builtin_return_address(0));
+	return ret ? 0 : -ENOMEM;
+}
+
+void kasan_module_free(void *addr)
+{
+	vfree(kasan_mem_to_shadow(addr));
+}
+
+static void register_global(struct kasan_global *global)
+{
+	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
+
+	kasan_unpoison_shadow(global->beg, global->size);
+
+	kasan_poison_shadow(global->beg + aligned_size,
+		global->size_with_redzone - aligned_size,
+		KASAN_GLOBAL_REDZONE);
+}
+
+void __asan_register_globals(struct kasan_global *globals, size_t size)
+{
+	int i;
+
+	for (i = 0; i < size; i++)
+		register_global(&globals[i]);
+}
+EXPORT_SYMBOL(__asan_register_globals);
+
+void __asan_unregister_globals(struct kasan_global *globals, size_t size)
+{
+}
+EXPORT_SYMBOL(__asan_unregister_globals);
+
 #define DEFINE_ASAN_LOAD_STORE(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1fcc1d8..4986b0a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -11,6 +11,7 @@
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
 
 /*
  * Stack redzone shadow values
@@ -21,6 +22,10 @@
 #define KASAN_STACK_RIGHT       0xF3
 #define KASAN_STACK_PARTIAL     0xF4
 
+/* Don't break randconfig/all*config builds */
+#ifndef KASAN_ABI_VERSION
+#define KASAN_ABI_VERSION 1
+#endif
 
 struct kasan_access_info {
 	const void *access_addr;
@@ -30,6 +35,26 @@ struct kasan_access_info {
 	unsigned long ip;
 };
 
+/* The layout of struct dictated by compiler */
+struct kasan_source_location {
+	const char *filename;
+	int line_no;
+	int column_no;
+};
+
+/* The layout of struct dictated by compiler */
+struct kasan_global {
+	const void *beg;		/* Address of the beginning of the global variable. */
+	size_t size;			/* Size of the global variable. */
+	size_t size_with_redzone;	/* Size of the variable + size of the red zone. 32 bytes aligned */
+	const void *name;
+	const void *module_name;	/* Name of the module where the global variable is declared. */
+	unsigned long has_dynamic_init;	/* This needed for C++ */
+#if KASAN_ABI_VERSION >= 4
+	struct kasan_source_location *location;
+#endif
+};
+
 void kasan_report_error(struct kasan_access_info *info);
 void kasan_report_user_access(struct kasan_access_info *info);
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 866732e..680ceed 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -23,6 +23,8 @@
 #include <linux/types.h>
 #include <linux/kasan.h>
 
+#include <asm/sections.h>
+
 #include "kasan.h"
 #include "../slab.h"
 
@@ -61,6 +63,7 @@ static void print_error_description(struct kasan_access_info *info)
 		break;
 	case KASAN_PAGE_REDZONE:
 	case KASAN_KMALLOC_REDZONE:
+	case KASAN_GLOBAL_REDZONE:
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -80,6 +83,20 @@ static void print_error_description(struct kasan_access_info *info)
 		info->access_size, current->comm, task_pid_nr(current));
 }
 
+static inline bool kernel_or_module_addr(const void *addr)
+{
+	return (addr >= (void *)_stext && addr < (void *)_end)
+		|| (addr >= (void *)MODULES_VADDR
+			&& addr < (void *)MODULES_END);
+}
+
+static inline bool init_task_stack_addr(const void *addr)
+{
+	return addr >= (void *)&init_thread_union.stack &&
+		(addr <= (void *)&init_thread_union.stack +
+			sizeof(init_thread_union.stack));
+}
+
 static void print_address_description(struct kasan_access_info *info)
 {
 	const void *addr = info->access_addr;
@@ -107,6 +124,11 @@ static void print_address_description(struct kasan_access_info *info)
 		dump_page(page, "kasan: bad access detected");
 	}
 
+	if (kernel_or_module_addr(addr)) {
+		if (!init_task_stack_addr(addr))
+			pr_err("Address belongs to variable %pS\n", addr);
+	}
+
 	dump_stack();
 }
 
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 0ac7d1d..df302f8 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -5,11 +5,12 @@ else
 	call_threshold := 0
 endif
 
-CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address)
+CFLAGS_KASAN_MINIMAL := $(call cc-option, -fsanitize=kernel-address \
+				--param asan-globals=1)
 
 CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
 		-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
-		--param asan-stack=1 \
+		--param asan-stack=1 --param asan-globals=1 \
 		--param asan-instrumentation-with-call-threshold=$(call_threshold))
 
 ifeq ($(CFLAGS_KASAN_MINIMAL),)
-- 
2.2.2


^ permalink raw reply related	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
  2015-02-03 17:42     ` Andrey Ryabinin
  (?)
@ 2015-02-03 23:04       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-02-03 23:04 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

On Tue, 03 Feb 2015 20:42:55 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

>
> ...
>
> Based on work by Andrey Konovalov <adech.fo@gmail.com>
>

We still don't have Andrey Konovalov's signoff?  As it stands we're
taking some of his work and putting it into Linux without his
permission.

> ...
>
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,302 @@
> +/*
> + * This file contains shadow memory manipulation code.
> + *
> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * Some of code borrowed from https://github.com/xairy/linux by
> + *        Andrey Konovalov <adech.fo@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */

https://code.google.com/p/thread-sanitizer/ is BSD licensed and we're
changing it to GPL.

I don't do the lawyer stuff, but this is all a bit worrisome.  I'd be a
lot more comfortable with that signed-off-by, please.



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-03 17:43     ` Andrey Ryabinin
@ 2015-02-03 23:51       ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-02-03 23:51 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Rusty Russell, James Bottomley

On Tue, 03 Feb 2015 20:43:11 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:

> MODULE_DEVICE_TABLE() macro used to create aliases to device tables.
> Normally alias should have the same type as aliased symbol.
> 
> Device tables are arrays, so they have 'struct type##_device_id[x]'
> types. Alias created by MODULE_DEVICE_TABLE() will have non-array type -
> 	'struct type##_device_id'.
> 
> This inconsistency confuses compiler, it could make a wrong
> assumption about variable's size which leads KASan to
> produce a false positive report about out of bounds access.
> 
> For every global variable compiler calls __asan_register_globals()
> passing information about global variable (address, size, size with
> redzone, name ...) __asan_register_globals() poison symbols
> redzone to detect possible out of bounds accesses.
> 
> When symbol has an alias __asan_register_globals() will be called
> as for symbol so for alias. Compiler determines size of variable by
> size of variable's type. Alias and symbol have the same address,
> so if alias have the wrong size part of memory that actually belongs
> to the symbol could be poisoned as redzone of alias symbol.
> 
> By fixing type of alias symbol we will fix size of it, so
> __asan_register_globals() will not poison valid memory.
> 
> ...
>
> --- a/include/linux/module.h
> +++ b/include/linux/module.h
> @@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
>  #ifdef MODULE
>  /* Creates an alias so file2alias.c can find device table. */
>  #define MODULE_DEVICE_TABLE(type, name)					\
> -  extern const struct type##_device_id __mod_##type##__##name##_device_table \
> +extern const typeof(name) __mod_##type##__##name##_device_table		\
>    __attribute__ ((unused, alias(__stringify(name))))
>  #else  /* !MODULE */
>  #define MODULE_DEVICE_TABLE(type, name)

This newly requires that `name' has been defined at the
MODULE_DEVICE_TABLE expansion site.

So drivers/scsi/be2iscsi/be_main.c explodes because we converted

extern const struct pci_device_id __mod_pci__beiscsi_pci_id_table_device_table __attribute__ ((unused, alias("beiscsi_pci_id_table")));

into

extern const typeof(beiscsi_pci_id_table) __mod_pci__beiscsi_pci_id_table_device_table __attribute__ ((unused, alias("beiscsi_pci_id_table")));

before beiscsi_pci_id_table was defined.
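
To make the ordering requirement concrete, a minimal sketch (the
table and IDs here are made up, not the be2iscsi code):

    /* Broken: with the typeof()-based macro this would expand to
     * 'extern const typeof(my_pci_id_table) ...' before the array
     * exists, so the build fails with 'my_pci_id_table' undeclared. */
    /* MODULE_DEVICE_TABLE(pci, my_pci_id_table); */

    static const struct pci_device_id my_pci_id_table[] = {
        { PCI_DEVICE(0x1234, 0x5678) },
        { }     /* terminating entry */
    };

    /* Fine: the macro follows the definition, so typeof() sees the
     * complete array type and the alias gets the right size. */
    MODULE_DEVICE_TABLE(pci, my_pci_id_table);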


There are probably others, so I'll start accumulating the fixes.



From: Andrew Morton <akpm@linux-foundation.org>
Subject: MODULE_DEVICE_TABLE: fix some callsites

The patch "module: fix types of device tables aliases" newly requires that
invokations of

MODULE_DEVICE_TABLE(type, name);

come *after* the definition of `name'.  That is reasonable, but some
drivers weren't doing this.  Fix them.

Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 drivers/scsi/be2iscsi/be_main.c |    1 -
 1 file changed, 1 deletion(-)

diff -puN drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites drivers/scsi/be2iscsi/be_main.c
--- a/drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites
+++ a/drivers/scsi/be2iscsi/be_main.c
@@ -48,7 +48,6 @@ static unsigned int be_iopoll_budget = 1
 static unsigned int be_max_phys_size = 64;
 static unsigned int enable_msix = 1;
 
-MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
 MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
 MODULE_VERSION(BUILD_STR);
 MODULE_AUTHOR("Emulex Corporation");
_


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-03 23:51       ` Andrew Morton
@ 2015-02-04  0:01         ` Sasha Levin
  -1 siblings, 0 replies; 862+ messages in thread
From: Sasha Levin @ 2015-02-04  0:01 UTC (permalink / raw)
  To: Andrew Morton, Andrey Ryabinin
  Cc: linux-kernel, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Christoph Lameter, Joonsoo Kim,
	Dave Hansen, Andi Kleen, x86, linux-mm, Rusty Russell,
	James Bottomley

On 02/03/2015 06:51 PM, Andrew Morton wrote:
> From: Andrew Morton <akpm@linux-foundation.org>
> Subject: MODULE_DEVICE_TABLE: fix some callsites
> 
> The patch "module: fix types of device tables aliases" newly requires that
> invokations of
  invocations
> 
> MODULE_DEVICE_TABLE(type, name);
> 
> come *after* the definition of `name'.  That is reasonable, but some
> drivers weren't doing this.  Fix them.
> 
> Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> 
>  drivers/scsi/be2iscsi/be_main.c |    1 -
>  1 file changed, 1 deletion(-)
> 
> diff -puN drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites drivers/scsi/be2iscsi/be_main.c
> --- a/drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites
> +++ a/drivers/scsi/be2iscsi/be_main.c
> @@ -48,7 +48,6 @@ static unsigned int be_iopoll_budget = 1
>  static unsigned int be_max_phys_size = 64;
>  static unsigned int enable_msix = 1;
>  
> -MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
>  MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
>  MODULE_VERSION(BUILD_STR);
>  MODULE_AUTHOR("Emulex Corporation");

This just removes MODULE_DEVICE_TABLE() rather than moving it to after the
definition of beiscsi_pci_id_table.


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-04  0:01         ` Sasha Levin
@ 2015-02-04  0:10           ` Andrew Morton
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrew Morton @ 2015-02-04  0:10 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Andrey Konovalov,
	Yuri Gribov, Konstantin Khlebnikov, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Rusty Russell, James Bottomley

On Tue, 03 Feb 2015 19:01:08 -0500 Sasha Levin <sasha.levin@oracle.com> wrote:

> > diff -puN drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites drivers/scsi/be2iscsi/be_main.c
> > --- a/drivers/scsi/be2iscsi/be_main.c~module_device_table-fix-some-callsites
> > +++ a/drivers/scsi/be2iscsi/be_main.c
> > @@ -48,7 +48,6 @@ static unsigned int be_iopoll_budget = 1
> >  static unsigned int be_max_phys_size = 64;
> >  static unsigned int enable_msix = 1;
> >  
> > -MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
> >  MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
> >  MODULE_VERSION(BUILD_STR);
> >  MODULE_AUTHOR("Emulex Corporation");
> 
> This just removes MODULE_DEVICE_TABLE() rather than moving it to after the
> definition of beiscsi_pci_id_table.

There's already a MODULE_DEVICE_TABLE() after the beiscsi_pci_id_table
definition. 

drivers/net/ethernet/emulex/benet/be_main.c did the same thing. 

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
  2015-02-03 23:04       ` Andrew Morton
  (?)
  (?)
@ 2015-02-04  3:56       ` Andrey Konovalov
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Konovalov @ 2015-02-04  3:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

[-- Attachment #1: Type: text/plain, Size: 1382 bytes --]

Sorry I didn't reply earlier.

Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>

On Wed, Feb 4, 2015 at 2:04 AM, Andrew Morton <akpm@linux-foundation.org>
wrote:

> On Tue, 03 Feb 2015 20:42:55 +0300 Andrey Ryabinin <a.ryabinin@samsung.com>
> wrote:
>
> >
> > ...
> >
> > Based on work by Andrey Konovalov <adech.fo@gmail.com>
> >
>
> We still don't have Andrey Konovalov's signoff?  As it stands we're
> taking some of his work and putting it into Linux without his
> permission.
>
> > ...
> >
> > --- /dev/null
> > +++ b/mm/kasan/kasan.c
> > @@ -0,0 +1,302 @@
> > +/*
> > + * This file contains shadow memory manipulation code.
> > + *
> > + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> > + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> > + *
> > + * Some of code borrowed from https://github.com/xairy/linux by
> > + *        Andrey Konovalov <adech.fo@gmail.com>
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + *
> > + */
>
> https://code.google.com/p/thread-sanitizer/ is BSD licensed and we're
> changing it to GPL.
>
> I don't do the lawyer stuff, but this is all a bit worrisome.  I'd be a
> lot more comfortable with that signed-off-by, please.
>
>
>


-- 
Sincerely,
Andrey Konovalov.

[-- Attachment #2: Type: text/html, Size: 2742 bytes --]

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 02/19] Add kernel address sanitizer infrastructure.
  2015-02-03 23:04       ` Andrew Morton
@ 2015-02-04  4:00         ` Andrey Konovalov
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Konovalov @ 2015-02-04  4:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, linux-kernel, Dmitry Vyukov,
	Konstantin Serebryany, Dmitry Chernenkov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Dave Hansen, Andi Kleen, x86, linux-mm,
	Jonathan Corbet, Michal Marek, Ingo Molnar, Peter Zijlstra,
	open list:DOCUMENTATION, open list:KERNEL BUILD + fi...

Sorry I didn't reply earlier.

Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>

(Repeating in plain text.)

On Wed, Feb 4, 2015 at 2:04 AM, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Tue, 03 Feb 2015 20:42:55 +0300 Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
>
>>
>> ...
>>
>> Based on work by Andrey Konovalov <adech.fo@gmail.com>
>>
>
> We still don't have Andrey Konovalov's signoff?  As it stands we're
> taking some of his work and putting it into Linux without his
> permission.
>
>> ...
>>
>> --- /dev/null
>> +++ b/mm/kasan/kasan.c
>> @@ -0,0 +1,302 @@
>> +/*
>> + * This file contains shadow memory manipulation code.
>> + *
>> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
>> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
>> + *
>> + * Some of code borrowed from https://github.com/xairy/linux by
>> + *        Andrey Konovalov <adech.fo@gmail.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + */
>
> https://code.google.com/p/thread-sanitizer/ is BSD licensed and we're
> changing it to GPL.
>
> I don't do the lawyer stuff, but this is all a bit worrisome.  I'd be a
> lot more comfortable with that signed-off-by, please.
>
>



-- 
Sincerely,
Andrey Konovalov.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-03 17:43     ` Andrey Ryabinin
@ 2015-02-16  2:44       ` Rusty Russell
  -1 siblings, 0 replies; 862+ messages in thread
From: Rusty Russell @ 2015-02-16  2:44 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm

Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> MODULE_DEVICE_TABLE() macro used to create aliases to device tables.
> Normally alias should have the same type as aliased symbol.
>
> Device tables are arrays, so they have 'struct type##_device_id[x]'
> types. Alias created by MODULE_DEVICE_TABLE() will have non-array type -
> 	'struct type##_device_id'.
>
> This inconsistency confuses compiler, it could make a wrong
> assumption about variable's size which leads KASan to
> produce a false positive report about out of bounds access.

Hmm, as Andrew Morton points out, this breaks some usage; if we just
fix the type (struct type##_device_id[]) will that work instead?

I'm guessing not, since typeof(x) will presumably preserve sizing
information?

Cheers,
Rusty.

>
> For every global variable compiler calls __asan_register_globals()
> passing information about global variable (address, size, size with
> redzone, name ...) __asan_register_globals() poison symbols
> redzone to detect possible out of bounds accesses.
>
> When symbol has an alias __asan_register_globals() will be called
> as for symbol so for alias. Compiler determines size of variable by
> size of variable's type. Alias and symbol have the same address,
> so if alias have the wrong size part of memory that actually belongs
> to the symbol could be poisoned as redzone of alias symbol.
>
> By fixing type of alias symbol we will fix size of it, so
> __asan_register_globals() will not poison valid memory.
>
> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
>  include/linux/module.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/module.h b/include/linux/module.h
> index b653d7c..42999fe 100644
> --- a/include/linux/module.h
> +++ b/include/linux/module.h
> @@ -135,7 +135,7 @@ void trim_init_extable(struct module *m);
>  #ifdef MODULE
>  /* Creates an alias so file2alias.c can find device table. */
>  #define MODULE_DEVICE_TABLE(type, name)					\
> -  extern const struct type##_device_id __mod_##type##__##name##_device_table \
> +extern const typeof(name) __mod_##type##__##name##_device_table		\
>    __attribute__ ((unused, alias(__stringify(name))))
>  #else  /* !MODULE */
>  #define MODULE_DEVICE_TABLE(type, name)
> -- 
> 2.2.2

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-03 17:43     ` Andrey Ryabinin
  (?)
@ 2015-02-16  2:58       ` Rusty Russell
  -1 siblings, 0 replies; 862+ messages in thread
From: Rusty Russell @ 2015-02-16  2:58 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen, x86,
	linux-mm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Michal Marek, open list:KERNEL BUILD + fi...

Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> This feature let us to detect accesses out of bounds of
> global variables. This will work as for globals in kernel
> image, so for globals in modules. Currently this won't work
> for symbols in user-specified sections (e.g. __init, __read_mostly, ...)
>
> The idea of this is simple. Compiler increases each global variable
> by redzone size and add constructors invoking __asan_register_globals()
> function. Information about global variable (address, size,
> size with redzone ...) passed to __asan_register_globals() so we could
> poison variable's redzone.
>
> This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
> address making shadow memory handling ( kasan_module_alloc()/kasan_module_free() )
> more simple. Such alignment guarantees that each shadow page backing
> modules address space correspond to only one module_alloc() allocation.

Hmm, I understand why you only fixed x86, but it's weird.

I think MODULE_ALIGN belongs in linux/moduleloader.h, and every arch
should be fixed up to use it (though you could leave that for later).

Might as well fix the default implementation at least.

> @@ -49,8 +49,15 @@ void kasan_krealloc(const void *object, size_t new_size);
>  void kasan_slab_alloc(struct kmem_cache *s, void *object);
>  void kasan_slab_free(struct kmem_cache *s, void *object);
>  
> +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
> +
> +int kasan_module_alloc(void *addr, size_t size);
> +void kasan_module_free(void *addr);
> +
>  #else /* CONFIG_KASAN */
>  
> +#define MODULE_ALIGN 1

Hmm, that should be PAGE_SIZE (we assume that in several places).
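
Putting this together with the moduleloader.h suggestion above, a hedged sketch of
where it could end up (not the actual follow-up patch):

/* linux/moduleloader.h (sketch): let KASAN override a sane default. */
#ifdef CONFIG_KASAN
#define MODULE_ALIGN	(PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
#else
#define MODULE_ALIGN	PAGE_SIZE	/* module_alloc() users assume this */
#endif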

> @@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
>  void __weak module_memfree(void *module_region)
>  {
>  	vfree(module_region);
> +	kasan_module_free(module_region);
>  }

This looks racy (memory reuse?).  Perhaps try other order?

Cheers,
Rusty.

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 18/19] module: fix types of device tables aliases
  2015-02-16  2:44       ` Rusty Russell
@ 2015-02-16 14:01         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-16 14:01 UTC (permalink / raw)
  To: Rusty Russell, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm

On 02/16/2015 05:44 AM, Rusty Russell wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>> MODULE_DEVICE_TABLE() macro used to create aliases to device tables.
>> Normally alias should have the same type as aliased symbol.
>>
>> Device tables are arrays, so they have 'struct type##_device_id[x]'
>> types. Alias created by MODULE_DEVICE_TABLE() will have non-array type -
>> 	'struct type##_device_id'.
>>
>> This inconsistency confuses compiler, it could make a wrong
>> assumption about variable's size which leads KASan to
>> produce a false positive report about out of bounds access.
> 
> Hmm, as Andrew Morton points out, this breaks some usage; if we just
> fix the type (struct type##_device_id[]) will that work instead?
> 
> I'm guessing not, since typeof(x) will presumably preserve sizing
> information?
> 

Yes, that won't work.
In this particular case 'struct type##_device_id[]' would be equivalent
to 'struct type##_device_id[1]':

$ cat test.c
struct d {
        int a;
        int b;
};
struct d arr[] = {
        {1, 2}, {3, 4}, {}
};
extern struct d arr_alias[] __attribute__((alias("arr")));

$ gcc -c test.c
test.c:8:17: warning: array ‘arr_alias’ assumed to have one element
 extern struct d arr_alias[] __attribute__((alias("arr")));


^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-16  2:58       ` Rusty Russell
@ 2015-02-16 14:44         ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-16 14:44 UTC (permalink / raw)
  To: Rusty Russell, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Michal Marek,
	open list:KERNEL BUILD + fi...

On 02/16/2015 05:58 AM, Rusty Russell wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>> This feature let us to detect accesses out of bounds of
>> global variables. This will work as for globals in kernel
>> image, so for globals in modules. Currently this won't work
>> for symbols in user-specified sections (e.g. __init, __read_mostly, ...)
>>
>> The idea of this is simple. Compiler increases each global variable
>> by redzone size and add constructors invoking __asan_register_globals()
>> function. Information about global variable (address, size,
>> size with redzone ...) passed to __asan_register_globals() so we could
>> poison variable's redzone.
>>
>> This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
>> address making shadow memory handling ( kasan_module_alloc()/kasan_module_free() )
>> more simple. Such alignment guarantees that each shadow page backing
>> modules address space correspond to only one module_alloc() allocation.
> 
> Hmm, I understand why you only fixed x86, but it's weird.
> 
> I think MODULE_ALIGN belongs in linux/moduleloader.h, and every arch
> should be fixed up to use it (though you could leave that for later).
> 
> Might as well fix the default implementation at least.
> 
>> @@ -49,8 +49,15 @@ void kasan_krealloc(const void *object, size_t new_size);
>>  void kasan_slab_alloc(struct kmem_cache *s, void *object);
>>  void kasan_slab_free(struct kmem_cache *s, void *object);
>>  
>> +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
>> +
>> +int kasan_module_alloc(void *addr, size_t size);
>> +void kasan_module_free(void *addr);
>> +
>>  #else /* CONFIG_KASAN */
>>  
>> +#define MODULE_ALIGN 1
> 
> Hmm, that should be PAGE_SIZE (we assume that in several places).
> 
>> @@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
>>  void __weak module_memfree(void *module_region)
>>  {
>>  	vfree(module_region);
>> +	kasan_module_free(module_region);
>>  }
> 
> This looks racy (memory reuse?).  Perhaps try other order?
> 

You are right, it's racy. A concurrent kasan_module_alloc() could fail because
kasan_module_free() hasn't been called/finished yet, so the whole module_alloc() will
fail and module loading will fail.
However, I just found out that this race is not the worst problem here.
When vfree(addr) is called in interrupt context, the memory at addr is reused for
storing a 'struct llist_node':

void vfree(const void *addr)
{
...
	if (unlikely(in_interrupt())) {
		struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
		if (llist_add((struct llist_node *)addr, &p->list))
			schedule_work(&p->wq);


In this case we have to free the shadow *after* freeing 'module_region', because
'module_region' is still used in llist_add() and later in free_work().
free_work() (in mm/vmalloc.c) processes the list in LIFO order, so to free the shadow
after 'module_region' is freed, kasan_module_free(module_region); should be called
before vfree(module_region);

It will still be racy, but that is not as bad as the potential crash we have now.
Honestly, I have no idea how to fix this race nicely. Any suggestions?
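
To make the ordering concrete, a minimal sketch of the reordering described above,
assuming kasan_module_free() releases the shadow via vfree() as well (still racy
against kasan_module_alloc(), as noted; not a final fix):

void __weak module_memfree(void *module_region)
{
	/*
	 * Queue the shadow for freeing first: free_work() drains the
	 * deferred-vfree list in LIFO order, so the shadow is released only
	 * after module_region itself, which llist_add()/free_work() still
	 * touch when vfree() runs from interrupt context.
	 */
	kasan_module_free(module_region);
	vfree(module_region);
}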




^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-16 14:44         ` Andrey Ryabinin
@ 2015-02-16 14:47           ` Dmitry Vyukov
  -1 siblings, 0 replies; 862+ messages in thread
From: Dmitry Vyukov @ 2015-02-16 14:47 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Rusty Russell, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Michal Marek,
	open list:KERNEL BUILD + fi...

Can a module be freed in an interrupt?


On Mon, Feb 16, 2015 at 5:44 PM, Andrey Ryabinin <a.ryabinin@samsung.com> wrote:
> On 02/16/2015 05:58 AM, Rusty Russell wrote:
>> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>>> This feature let us to detect accesses out of bounds of
>>> global variables. This will work as for globals in kernel
>>> image, so for globals in modules. Currently this won't work
>>> for symbols in user-specified sections (e.g. __init, __read_mostly, ...)
>>>
>>> The idea of this is simple. Compiler increases each global variable
>>> by redzone size and add constructors invoking __asan_register_globals()
>>> function. Information about global variable (address, size,
>>> size with redzone ...) passed to __asan_register_globals() so we could
>>> poison variable's redzone.
>>>
>>> This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
>>> address making shadow memory handling ( kasan_module_alloc()/kasan_module_free() )
>>> more simple. Such alignment guarantees that each shadow page backing
>>> modules address space correspond to only one module_alloc() allocation.
>>
>> Hmm, I understand why you only fixed x86, but it's weird.
>>
>> I think MODULE_ALIGN belongs in linux/moduleloader.h, and every arch
>> should be fixed up to use it (though you could leave that for later).
>>
>> Might as well fix the default implementation at least.
>>
>>> @@ -49,8 +49,15 @@ void kasan_krealloc(const void *object, size_t new_size);
>>>  void kasan_slab_alloc(struct kmem_cache *s, void *object);
>>>  void kasan_slab_free(struct kmem_cache *s, void *object);
>>>
>>> +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
>>> +
>>> +int kasan_module_alloc(void *addr, size_t size);
>>> +void kasan_module_free(void *addr);
>>> +
>>>  #else /* CONFIG_KASAN */
>>>
>>> +#define MODULE_ALIGN 1
>>
>> Hmm, that should be PAGE_SIZE (we assume that in several places).
>>
>>> @@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
>>>  void __weak module_memfree(void *module_region)
>>>  {
>>>      vfree(module_region);
>>> +    kasan_module_free(module_region);
>>>  }
>>
>> This looks racy (memory reuse?).  Perhaps try other order?
>>
>
> You are right, it's racy. Concurrent kasan_module_alloc() could fail because
> kasan_module_free() wasn't called/finished yet, so whole module_alloc() will fail
> and module loading will fail.
> However, I just find out that this race is not the worst problem here.
> When vfree(addr) called in interrupt context, memory at addr will be reused for
> storing 'struct llist_node':
>
> void vfree(const void *addr)
> {
> ...
>         if (unlikely(in_interrupt())) {
>                 struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
>                 if (llist_add((struct llist_node *)addr, &p->list))
>                         schedule_work(&p->wq);
>
>
> In this case we have to free shadow *after* freeing 'module_region', because 'module_region'
> is still used in llist_add() and in free_work() latter.
> free_work() (in mm/vmalloc.c) processes list in LIFO order, so to free shadow after freeing
> 'module_region' kasan_module_free(module_region); should be called before vfree(module_region);
>
> It will be racy still, but this is not so bad as potential crash that we have now.
> Honestly, I have no idea how to fix this race nicely. Any suggestions?
>
>
>

^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-16 14:47           ` Dmitry Vyukov
@ 2015-02-16 15:09             ` Andrey Ryabinin
  -1 siblings, 0 replies; 862+ messages in thread
From: Andrey Ryabinin @ 2015-02-16 15:09 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Rusty Russell, LKML, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Michal Marek,
	open list:KERNEL BUILD + fi...

On 02/16/2015 05:47 PM, Dmitry Vyukov wrote:
> Can a module be freed in an interrupt?
> 
> 

Since commit c749637909ee ("module: fix race in kallsyms resolution during module load success."),
the module's init section is always freed via an RCU callback (RCU callbacks are executed from softirq).

Currently, with DEBUG_PAGEALLOC and KASAN, module loading always causes a kernel crash.
It's harder to trigger this without DEBUG_PAGEALLOC because of the lazy TLB flushing in vmalloc.

BUG: unable to handle kernel paging request at fffffbfff4011000
IP: [<ffffffff811d8f7b>] __asan_load8+0x2b/0xa0
PGD 7ffa3063 PUD 7ffa2063 PMD 484ea067 PTE 0
Oops: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
Dumping ftrace buffer:
   (ftrace buffer empty)
Modules linked in: ipv6
CPU: 0 PID: 30 Comm: kworker/0:1 Tainted: G        W       3.19.0-rc7-next-20150209+ #209
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
Workqueue: events free_work
task: ffff88006c5a8870 ti: ffff88006c630000 task.ti: ffff88006c630000
RIP: 0010:[<ffffffff811d8f7b>]  [<ffffffff811d8f7b>] __asan_load8+0x2b/0xa0
RSP: 0018:ffff88006c637cd8  EFLAGS: 00010286
RAX: fffffbfff4011000 RBX: ffffffffa0088000 RCX: ffffed000da000a9
RDX: dffffc0000000000 RSI: 0000000000000001 RDI: ffffffffa0088000
RBP: ffff88006c637d08 R08: 0000000000000000 R09: ffff88006d007840
R10: ffff88006d000540 R11: ffffed000da000a9 R12: ffffffffa0088000
R13: ffff88006d61a5d8 R14: ffff88006d61a5d8 R15: ffff88006d61a5c0
FS:  0000000000000000(0000) GS:ffff88006d600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: fffffbfff4011000 CR3: 000000004d967000 CR4: 00000000000006b0
Stack:
 ffff88006c637ce8 fffffbfff4011000 ffffffffa0088000 ffff88006d61a5d8
 ffff88006d61a5d8 ffff88006d61a5c0 ffff88006c637d28 ffffffff811bb1b8
 ffff88006c5bc618 ffff88006d617b28 ffff88006c637db8 ffffffff8108e1b0
Call Trace:
 [<ffffffff811bb1b8>] free_work+0x38/0x60
 [<ffffffff8108e1b0>] process_one_work+0x2a0/0x7d0
 [<ffffffff8108f653>] worker_thread+0x93/0x840
 [<ffffffff8108f5c0>] ? init_pwq.part.11+0x10/0x10
 [<ffffffff81096f37>] kthread+0x177/0x1a0
 [<ffffffff81096dc0>] ? kthread_worker_fn+0x290/0x290
 [<ffffffff81096dc0>] ? kthread_worker_fn+0x290/0x290
 [<ffffffff8158cd7c>] ret_from_fork+0x7c/0xb0
 [<ffffffff81096dc0>] ? kthread_worker_fn+0x290/0x290
Code: 48 b8 ff ff ff ff ff 7f ff ff 55 48 89 e5 48 83 ec 30 48 39 c7 76 59 48 ba 00 00 00 00 00 fc ff df 48 89 f8 48 c1 e8 03 48 01 d0 <66> 83 38 00 75 07 c9 c3 0f 1f 44 00 00 48 8d 4f 07 48 89 ce 48
RIP  [<ffffffff811d8f7b>] __asan_load8+0x2b/0xa0
 RSP <ffff88006c637cd8>
CR2: fffffbfff4011000
---[ end trace b9411d841784b6cf ]---



^ permalink raw reply	[flat|nested] 862+ messages in thread

* Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables
  2015-02-16 14:44         ` Andrey Ryabinin
  (?)
@ 2015-02-16 23:55           ` Rusty Russell
  -1 siblings, 0 replies; 862+ messages in thread
From: Rusty Russell @ 2015-02-16 23:55 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel
  Cc: Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov,
	Sasha Levin, Christoph Lameter, Joonsoo Kim, Andrew Morton,
	Dave Hansen, Andi Kleen, x86, linux-mm, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Michal Marek,
	open list:KERNEL BUILD + fi...

Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> On 02/16/2015 05:58 AM, Rusty Russell wrote:
>> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
>>> This feature let us to detect accesses out of bounds of
>>> global variables. This will work as for globals in kernel
>>> image, so for globals in modules. Currently this won't work
>>> for symbols in user-specified sections (e.g. __init, __read_mostly, ...)
>>> @@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
>>>  void __weak module_memfree(void *module_region)
>>>  {
>>>  	vfree(module_region);
>>> +	kasan_module_free(module_region);
>>>  }
>> 
>> This looks racy (memory reuse?).  Perhaps try other order?
>> 
>
> You are right, it's racy. Concurrent kasan_module_alloc() could fail because
> kasan_module_free() wasn't called/finished yet, so whole module_alloc() will fail
> and module loading will fail.
> However, I just find out that this race is not the worst problem here.
> When vfree(addr) called in interrupt context, memory at addr will be reused for
> storing 'struct llist_node':
>
> void vfree(const void *addr)
> {
> ...
> 	if (unlikely(in_interrupt())) {
> 		struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
> 		if (llist_add((struct llist_node *)addr, &p->list))
> 			schedule_work(&p->wq);
>
>
> In this case we have to free shadow *after* freeing 'module_region', because 'module_region'
> is still used in llist_add() and in free_work() latter.
> free_work() (in mm/vmalloc.c) processes list in LIFO order, so to free shadow after freeing
> 'module_region' kasan_module_free(module_region); should be called before vfree(module_region);
>
> It will be racy still, but this is not so bad as potential crash that we have now.
> Honestly, I have no idea how to fix this race nicely. Any suggestions?

I think you need to take over the rcu callback for the kasan case.

Perhaps we rename that __module_memfree(), and do:

void module_memfree(void *p)
{
#ifdef CONFIG_KASAN
        ...
#endif
        __module_memfree(p);        
}
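
One way that #ifdef could be filled in, purely as a hypothetical sketch (the deferral
helper below does not exist; the point is only that the KASAN branch would own the
"free shadow, then region" ordering from process context):

void module_memfree(void *p)
{
#ifdef CONFIG_KASAN
	if (in_interrupt()) {
		/* Hypothetical helper: queue work that frees the shadow and
		 * then calls __module_memfree(p) from process context. */
		kasan_defer_module_free(p);
		return;
	}
	kasan_module_free(p);
#endif
	__module_memfree(p);
}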

Note: there are calls to module_memfree from other code (BPF and
kprobes).  I assume you looked at those too...

Cheers,
Rusty.

^ permalink raw reply	[flat|nested] 862+ messages in thread

end of thread, other threads:[~2015-02-16 23:56 UTC | newest]

Thread overview: 862+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-09 11:29 [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Andrey Ryabinin
2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 11:29 ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 01/21] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 14:26   ` Christoph Lameter
2014-07-09 14:26     ` Christoph Lameter
2014-07-09 14:26     ` Christoph Lameter
2014-07-10  7:31     ` Andrey Ryabinin
2014-07-10  7:31       ` Andrey Ryabinin
2014-07-10  7:31       ` Andrey Ryabinin
2014-07-09 19:29   ` Andi Kleen
2014-07-09 19:29     ` Andi Kleen
2014-07-09 19:29     ` Andi Kleen
2014-07-09 20:40     ` Yuri Gribov
2014-07-09 20:40       ` Yuri Gribov
2014-07-09 20:40       ` Yuri Gribov
2014-07-10 12:10     ` Andrey Ryabinin
2014-07-10 12:10       ` Andrey Ryabinin
2014-07-10 12:10       ` Andrey Ryabinin
2014-07-09 20:26   ` Dave Hansen
2014-07-09 20:26     ` Dave Hansen
2014-07-09 20:26     ` Dave Hansen
2014-07-10 12:12     ` Andrey Ryabinin
2014-07-10 12:12       ` Andrey Ryabinin
2014-07-10 12:12       ` Andrey Ryabinin
2014-07-10 15:55       ` Dave Hansen
2014-07-10 15:55         ` Dave Hansen
2014-07-10 15:55         ` Dave Hansen
2014-07-10 19:48         ` Andrey Ryabinin
2014-07-10 19:48           ` Andrey Ryabinin
2014-07-10 19:48           ` Andrey Ryabinin
2014-07-10 20:04           ` Dave Hansen
2014-07-10 20:04             ` Dave Hansen
2014-07-10 20:04             ` Dave Hansen
2014-07-09 20:37   ` Dave Hansen
2014-07-09 20:37     ` Dave Hansen
2014-07-09 20:37     ` Dave Hansen
2014-07-09 20:38   ` Dave Hansen
2014-07-09 20:38     ` Dave Hansen
2014-07-09 20:38     ` Dave Hansen
2014-07-10 11:55   ` Sasha Levin
2014-07-10 11:55     ` Sasha Levin
2014-07-10 11:55     ` Sasha Levin
2014-07-10 13:01     ` Andrey Ryabinin
2014-07-10 13:01       ` Andrey Ryabinin
2014-07-10 13:01       ` Andrey Ryabinin
2014-07-10 13:31       ` Sasha Levin
2014-07-10 13:31         ` Sasha Levin
2014-07-10 13:31         ` Sasha Levin
2014-07-10 13:39         ` Andrey Ryabinin
2014-07-10 13:39           ` Andrey Ryabinin
2014-07-10 13:39           ` Andrey Ryabinin
2014-07-10 14:02           ` Sasha Levin
2014-07-10 14:02             ` Sasha Levin
2014-07-10 19:04             ` Andrey Ryabinin
2014-07-10 19:04               ` Andrey Ryabinin
2014-07-10 19:04               ` Andrey Ryabinin
2014-07-10 13:50         ` Andrey Ryabinin
2014-07-10 13:50           ` Andrey Ryabinin
2014-07-10 13:50           ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 02/21] init: main: initialize kasan's shadow area on boot Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 03/21] x86: add kasan hooks fort memcpy/memmove/memset functions Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 19:31   ` Andi Kleen
2014-07-09 19:31     ` Andi Kleen
2014-07-09 19:31     ` Andi Kleen
2014-07-10 13:54     ` Andrey Ryabinin
2014-07-10 13:54       ` Andrey Ryabinin
2014-07-10 13:54       ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 04/21] x86: boot: vdso: disable instrumentation for code not linked with kernel Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29 ` [RFC/PATCH RESEND -next 05/21] x86: cpu: don't sanitize early stages of a secondary CPU boot Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 11:29   ` Andrey Ryabinin
2014-07-09 19:33   ` Andi Kleen
2014-07-09 19:33     ` Andi Kleen
2014-07-09 19:33     ` Andi Kleen
2014-07-10 13:15     ` Andrey Ryabinin
2014-07-10 13:15       ` Andrey Ryabinin
2014-07-10 13:15       ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 06/21] x86: mm: init: allocate shadow memory for kasan Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 07/21] x86: Kconfig: enable kernel address sanitizer Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-15  5:52   ` Joonsoo Kim
2014-07-15  5:52     ` Joonsoo Kim
2014-07-15  5:52     ` Joonsoo Kim
2014-07-15  6:54     ` Andrey Ryabinin
2014-07-15  6:54       ` Andrey Ryabinin
2014-07-15  6:54       ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 09/21] mm: Makefile: kasan: don't instrument slub.c and slab_common.c files Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-15  5:53   ` Joonsoo Kim
2014-07-15  5:53     ` Joonsoo Kim
2014-07-15  5:53     ` Joonsoo Kim
2014-07-15  6:56     ` Andrey Ryabinin
2014-07-15  6:56       ` Andrey Ryabinin
2014-07-15  6:56       ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 11/21] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 14:29   ` Christoph Lameter
2014-07-09 14:29     ` Christoph Lameter
2014-07-09 14:29     ` Christoph Lameter
2014-07-10  7:41     ` Andrey Ryabinin
2014-07-10  7:41       ` Andrey Ryabinin
2014-07-10  7:41       ` Andrey Ryabinin
2014-07-10 14:07       ` Christoph Lameter
2014-07-10 14:07         ` Christoph Lameter
2014-07-10 14:07         ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 12/21] mm: util: move krealloc/kzfree to slab_common.c Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 14:32   ` Christoph Lameter
2014-07-09 14:32     ` Christoph Lameter
2014-07-09 14:32     ` Christoph Lameter
2014-07-10  7:43     ` Andrey Ryabinin
2014-07-10  7:43       ` Andrey Ryabinin
2014-07-10  7:43       ` Andrey Ryabinin
2014-07-10 14:08       ` Christoph Lameter
2014-07-10 14:08         ` Christoph Lameter
2014-07-10 14:08         ` Christoph Lameter
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 14:33   ` Christoph Lameter
2014-07-09 14:33     ` Christoph Lameter
2014-07-09 14:33     ` Christoph Lameter
2014-07-10  8:44     ` Andrey Ryabinin
2014-07-10  8:44       ` Andrey Ryabinin
2014-07-10  8:44       ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching inaccessible memory Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-15  6:04   ` Joonsoo Kim
2014-07-15  6:04     ` Joonsoo Kim
2014-07-15  6:04     ` Joonsoo Kim
2014-07-15  7:37     ` Andrey Ryabinin
2014-07-15  7:37       ` Andrey Ryabinin
2014-07-15  7:37       ` Andrey Ryabinin
2014-07-15  8:18       ` Joonsoo Kim
2014-07-15  8:18         ` Joonsoo Kim
2014-07-15  8:18         ` Joonsoo Kim
2014-07-15  9:51         ` Andrey Ryabinin
2014-07-15  9:51           ` Andrey Ryabinin
2014-07-15  9:51           ` Andrey Ryabinin
2014-07-15 14:26         ` Christoph Lameter
2014-07-15 14:26           ` Christoph Lameter
2014-07-15 14:26           ` Christoph Lameter
2014-07-15 15:02           ` Andrey Ryabinin
2014-07-15 15:02             ` Andrey Ryabinin
2014-07-15 15:02             ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 14:48   ` Christoph Lameter
2014-07-09 14:48     ` Christoph Lameter
2014-07-09 14:48     ` Christoph Lameter
2014-07-10  9:24     ` Andrey Ryabinin
2014-07-10  9:24       ` Andrey Ryabinin
2014-07-10  9:24       ` Andrey Ryabinin
2014-07-15  6:09   ` Joonsoo Kim
2014-07-15  6:09     ` Joonsoo Kim
2014-07-15  6:09     ` Joonsoo Kim
2014-07-15  7:45     ` Andrey Ryabinin
2014-07-15  7:45       ` Andrey Ryabinin
2014-07-15  7:45       ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 16/21] arm: boot: compressed: disable kasan's instrumentation Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 17/21] arm: add kasan hooks for memcpy/memmove/memset functions Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 18/21] arm: mm: reserve shadow memory for kasan Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 19/21] arm: Kconfig: enable kernel address sanitizer Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-15  6:12   ` Joonsoo Kim
2014-07-15  6:12     ` Joonsoo Kim
2014-07-15  6:12     ` Joonsoo Kim
2014-07-15  6:08     ` Dmitry Vyukov
2014-07-15  6:08       ` Dmitry Vyukov
2014-07-15  6:08       ` Dmitry Vyukov
2014-07-15  9:34     ` Andrey Ryabinin
2014-07-15  9:34       ` Andrey Ryabinin
2014-07-15  9:34       ` Andrey Ryabinin
2014-07-15  9:45       ` Dmitry Vyukov
2014-07-15  9:45         ` Dmitry Vyukov
2014-07-15  9:45         ` Dmitry Vyukov
2014-07-09 11:30 ` [RFC/PATCH RESEND -next 21/21] lib: add kmalloc_bug_test module Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 11:30   ` Andrey Ryabinin
2014-07-09 21:19 ` [RFC/PATCH RESEND -next 00/21] Address sanitizer for kernel (kasan) - dynamic memory error detector Dave Hansen
2014-07-09 21:19   ` Dave Hansen
2014-07-09 21:19   ` Dave Hansen
2014-07-09 21:44   ` Andi Kleen
2014-07-09 21:44     ` Andi Kleen
2014-07-09 21:44     ` Andi Kleen
2014-07-09 21:59     ` Vegard Nossum
2014-07-09 21:59       ` Vegard Nossum
2014-07-09 21:59       ` Vegard Nossum
2014-07-09 23:33       ` Dave Hansen
2014-07-09 23:33         ` Dave Hansen
2014-07-09 23:33         ` Dave Hansen
2014-07-10  0:03       ` Andi Kleen
2014-07-10  0:03         ` Andi Kleen
2014-07-10  0:03         ` Andi Kleen
2014-07-10 13:59       ` Andrey Ryabinin
2014-07-10 13:59         ` Andrey Ryabinin
2014-07-10 13:59         ` Andrey Ryabinin
2014-09-10 14:31 ` [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector Andrey Ryabinin
2014-09-10 14:31   ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 01/10] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-11  3:55     ` Sasha Levin
2014-09-11  3:55       ` Sasha Levin
2014-09-14  1:35     ` Randy Dunlap
2014-09-14  1:35       ` Randy Dunlap
2014-09-15 15:28       ` Andrey Ryabinin
2014-09-15 15:28         ` Andrey Ryabinin
2014-09-15 16:24         ` Randy Dunlap
2014-09-15 16:24           ` Randy Dunlap
2014-09-10 14:31   ` [RFC/PATCH v2 02/10] x86_64: add KASan support Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 15:46     ` Dave Hansen
2014-09-10 15:46       ` Dave Hansen
2014-09-10 20:30       ` Andrey Ryabinin
2014-09-10 20:30         ` Andrey Ryabinin
2014-09-10 22:45         ` Dave Hansen
2014-09-10 22:45           ` Dave Hansen
2014-09-11  4:26           ` H. Peter Anvin
2014-09-11  4:26             ` H. Peter Anvin
2014-09-11  4:29             ` Sasha Levin
2014-09-11  4:29               ` Sasha Levin
2014-09-11  4:33               ` H. Peter Anvin
2014-09-11  4:33                 ` H. Peter Anvin
2014-09-11  4:46                 ` Andi Kleen
2014-09-11  4:46                   ` Andi Kleen
2014-09-11  4:52                   ` H. Peter Anvin
2014-09-11  4:52                     ` H. Peter Anvin
2014-09-11  5:25                   ` Andrey Ryabinin
2014-09-11  5:25                     ` Andrey Ryabinin
2014-09-11  4:33               ` H. Peter Anvin
2014-09-11  4:33                 ` H. Peter Anvin
2014-09-11 11:51               ` Andrey Ryabinin
2014-09-11 11:51                 ` Andrey Ryabinin
2014-09-18 16:54                 ` Sasha Levin
2014-09-18 16:54                   ` Sasha Levin
2014-09-11  4:01     ` H. Peter Anvin
2014-09-11  4:01       ` H. Peter Anvin
2014-09-11  4:01     ` H. Peter Anvin
2014-09-11  4:01       ` H. Peter Anvin
2014-09-11  5:31       ` Andrey Ryabinin
2014-09-11  5:31         ` Andrey Ryabinin
2014-10-01 15:31         ` H. Peter Anvin
2014-10-01 15:31           ` H. Peter Anvin
2014-10-01 16:28           ` Andrey Ryabinin
2014-10-01 16:28             ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 03/10] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 04/10] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 16:16     ` Christoph Lameter
2014-09-10 16:16       ` Christoph Lameter
2014-09-10 20:32       ` Andrey Ryabinin
2014-09-10 20:32         ` Andrey Ryabinin
2014-09-15  7:11         ` Andrey Ryabinin
2014-09-15  7:11           ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 05/10] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-15  7:11     ` Andrey Ryabinin
2014-09-15  7:11       ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 06/10] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 07/10] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 08/10] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 09/10] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 14:31   ` [RFC/PATCH v2 10/10] lib: add kasan test module Andrey Ryabinin
2014-09-10 14:31     ` Andrey Ryabinin
2014-09-10 20:38     ` Dave Jones
2014-09-10 20:38       ` Dave Jones
2014-09-10 20:46       ` Andrey Ryabinin
2014-09-10 20:46         ` Andrey Ryabinin
2014-09-10 20:47         ` Dave Jones
2014-09-10 20:47           ` Dave Jones
2014-09-10 20:50           ` Andrey Ryabinin
2014-09-10 20:50             ` Andrey Ryabinin
2014-09-10 15:01   ` [RFC/PATCH v2 00/10] Kernel address sanitizer (KASan) - dynamic memory error detector Dave Hansen
2014-09-10 15:01     ` Dave Hansen
2014-09-10 14:58     ` Andrey Ryabinin
2014-09-10 14:58       ` Andrey Ryabinin
2014-09-10 15:12   ` Sasha Levin
2014-09-10 15:12     ` Sasha Levin
2014-09-24 12:43 ` [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2014-09-24 12:43   ` Andrey Ryabinin
2014-09-24 12:43   ` [PATCH v3 01/13] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-09-24 12:43     ` Andrey Ryabinin
2014-09-24 12:43   ` [PATCH v3 02/13] efi: libstub: disable KASAN for efistub Andrey Ryabinin
2014-09-24 12:43     ` Andrey Ryabinin
2014-09-24 12:43   ` [PATCH v3 03/13] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-09-24 12:43     ` Andrey Ryabinin
2014-09-24 12:44   ` [PATCH v3 04/13] x86_64: add KASan support Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-24 12:44   ` [PATCH v3 05/13] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-25 17:04     ` Dmitry Vyukov
2014-09-25 17:04       ` Dmitry Vyukov
2014-09-24 12:44   ` [PATCH v3 06/13] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-24 12:44   ` [PATCH v3 07/13] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-24 12:44   ` [PATCH v3 08/13] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-26  4:03     ` Dmitry Vyukov
2014-09-26  4:03       ` Dmitry Vyukov
2014-09-24 12:44   ` [PATCH v3 09/13] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-26  4:48     ` Dmitry Vyukov
2014-09-26  4:48       ` Dmitry Vyukov
2014-09-26  7:25       ` Andrey Ryabinin
2014-09-26  7:25         ` Andrey Ryabinin
2014-09-26 15:52         ` Dmitry Vyukov
2014-09-26 15:52           ` Dmitry Vyukov
2014-09-26 14:22       ` Christoph Lameter
2014-09-26 14:22         ` Christoph Lameter
2014-09-26 15:55         ` Dmitry Vyukov
2014-09-26 15:55           ` Dmitry Vyukov
2014-09-24 12:44   ` [PATCH v3 10/13] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-24 12:44   ` [PATCH v3 11/13] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-26 17:10     ` Dmitry Vyukov
2014-09-26 17:10       ` Dmitry Vyukov
2014-09-26 17:36       ` Andrey Ryabinin
2014-09-26 17:36         ` Andrey Ryabinin
2014-09-29 14:10         ` Dmitry Vyukov
2014-09-29 14:10           ` Dmitry Vyukov
2014-10-01 10:39           ` Catalin Marinas
2014-10-01 10:39             ` Catalin Marinas
2014-10-01 11:45             ` Andrey Ryabinin
2014-10-01 11:45               ` Andrey Ryabinin
2014-10-01 13:27               ` Dmitry Vyukov
2014-10-01 13:27                 ` Dmitry Vyukov
2014-10-01 14:11                 ` Andrey Ryabinin
2014-10-01 14:11                   ` Andrey Ryabinin
2014-10-01 14:24                   ` Dmitry Vyukov
2014-10-01 14:24                     ` Dmitry Vyukov
2014-09-24 12:44   ` [PATCH v3 12/13] lib: add kasan test module Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-26 17:11     ` Dmitry Vyukov
2014-09-26 17:11       ` Dmitry Vyukov
2014-09-24 12:44   ` [RFC PATCH v3 13/13] kasan: introduce inline instrumentation Andrey Ryabinin
2014-09-24 12:44     ` Andrey Ryabinin
2014-09-26 17:18     ` Dmitry Vyukov
2014-09-26 17:18       ` Dmitry Vyukov
2014-09-26 17:33       ` Andrey Ryabinin
2014-09-26 17:33         ` Andrey Ryabinin
2014-09-29 14:28         ` Dmitry Vyukov
2014-09-29 14:28           ` Dmitry Vyukov
2014-09-29 14:27           ` Andrey Ryabinin
2014-09-29 14:27             ` Andrey Ryabinin
2014-09-29 14:27     ` Dmitry Vyukov
2014-09-29 14:27       ` Dmitry Vyukov
2014-09-24 15:11   ` [PATCH v3 00/13] Kernel address sanitizer - runtime memory debugger Andrew Morton
2014-09-24 15:11     ` Andrew Morton
2014-09-26 17:01   ` Sasha Levin
2014-09-26 17:01     ` Sasha Levin
2014-09-26 17:07     ` Dmitry Vyukov
2014-09-26 17:07       ` Dmitry Vyukov
2014-09-26 17:22       ` Andrey Ryabinin
2014-09-26 17:22         ` Andrey Ryabinin
2014-09-26 17:29         ` Dmitry Vyukov
2014-09-26 17:29           ` Dmitry Vyukov
2014-09-26 18:48           ` Yuri Gribov
2014-09-26 18:48             ` Yuri Gribov
2014-09-29 14:22             ` Dmitry Vyukov
2014-09-29 14:22               ` Dmitry Vyukov
2014-09-29 14:36               ` Peter Zijlstra
2014-09-29 14:36                 ` Peter Zijlstra
2014-09-29 14:48                 ` Dmitry Vyukov
2014-09-29 14:48                   ` Dmitry Vyukov
2014-09-26 17:17     ` Andrey Ryabinin
2014-09-26 17:17       ` Andrey Ryabinin
2014-10-16 17:18   ` Yuri Gribov
2014-10-16 17:18     ` Yuri Gribov
2014-10-06 15:53 ` [PATCH v4 " Andrey Ryabinin
2014-10-06 15:53   ` Andrey Ryabinin
2014-10-06 15:53   ` [PATCH v4 01/13] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-10-06 15:53     ` Andrey Ryabinin
2014-10-06 15:53   ` [PATCH v4 02/13] efi: libstub: disable KASAN for efistub Andrey Ryabinin
2014-10-06 15:53     ` Andrey Ryabinin
2014-10-07  9:19     ` Dmitry Vyukov
2014-10-07  9:19       ` Dmitry Vyukov
2014-10-06 15:53   ` [PATCH v4 03/13] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-10-06 15:53     ` Andrey Ryabinin
2014-10-06 15:53   ` [PATCH v4 04/13] x86_64: add KASan support Andrey Ryabinin
2014-10-06 15:53     ` Andrey Ryabinin
2014-10-06 15:53   ` [PATCH v4 05/13] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-10-06 15:53     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 06/13] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 07/13] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 08/13] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 09/13] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 10/13] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 11/13] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [PATCH v4 12/13] lib: add kasan test module Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-06 15:54   ` [RFC PATCH v4 13/13] kasan: introduce inline instrumentation Andrey Ryabinin
2014-10-06 15:54     ` Andrey Ryabinin
2014-10-07  9:17     ` Dmitry Vyukov
2014-10-07  9:17       ` Dmitry Vyukov
2014-10-27 16:46 ` [PATCH v5 00/12] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2014-10-27 16:46   ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 01/12] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 17:20     ` Jonathan Corbet
2014-10-27 17:20       ` Jonathan Corbet
2014-10-28 12:24       ` Andrey Ryabinin
2014-10-28 12:24         ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 02/12] kasan: Add support for upcoming GCC 5.0 asan ABI changes Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 03/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 04/12] x86_64: add KASan support Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 05/12] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 06/12] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 07/12] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 17:00     ` Joe Perches
2014-10-27 17:00       ` Joe Perches
2014-10-27 17:07       ` Andrey Ryabinin
2014-10-27 17:07         ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 08/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 09/12] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 10/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 11/12] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-10-27 16:46   ` [PATCH v5 12/12] lib: add kasan test module Andrey Ryabinin
2014-10-27 16:46     ` Andrey Ryabinin
2014-11-05 14:53 ` [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2014-11-05 14:53   ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 01/11] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 02/11] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 03/11] x86_64: add KASan support Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 04/11] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 05/11] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 06/11] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 07/11] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 08/11] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:53   ` [PATCH v6 09/11] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-11-05 14:53     ` Andrey Ryabinin
2014-11-05 14:54   ` [PATCH v6 10/11] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-11-05 14:54     ` Andrey Ryabinin
2014-11-05 14:54   ` [PATCH] lib: add kasan test module Andrey Ryabinin
2014-11-05 14:54     ` Andrey Ryabinin
2014-11-11  7:21   ` [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2014-11-11  7:21     ` Andrey Ryabinin
2014-11-18 17:08     ` Andrey Ryabinin
2014-11-18 17:08       ` Andrey Ryabinin
2014-11-18 20:58     ` Andrew Morton
2014-11-18 20:58       ` Andrew Morton
2014-11-18 21:09       ` Sasha Levin
2014-11-18 21:09         ` Sasha Levin
2014-11-18 21:15       ` Andi Kleen
2014-11-18 21:15         ` Andi Kleen
2014-11-18 21:32         ` Dave Hansen
2014-11-18 21:32           ` Dave Hansen
2014-11-18 23:53       ` Andrey Ryabinin
2014-11-18 23:53         ` Andrey Ryabinin
2014-11-20  9:03         ` Ingo Molnar
2014-11-20  9:03           ` Ingo Molnar
2014-11-20 12:35           ` Andrey Ryabinin
2014-11-20 12:35             ` Andrey Ryabinin
2014-11-20 16:32           ` Dmitry Vyukov
2014-11-20 16:32             ` Dmitry Vyukov
2014-11-20 23:00             ` Andrew Morton
2014-11-20 23:00               ` Andrew Morton
2014-11-20 23:14               ` Thomas Gleixner
2014-11-20 23:14                 ` Thomas Gleixner
2014-11-21 16:06                 ` Andrey Ryabinin
2014-11-21 16:06                   ` Andrey Ryabinin
2014-11-21  7:32               ` Dmitry Vyukov
2014-11-21  7:32                 ` Dmitry Vyukov
2014-11-21 11:19                 ` Andrey Ryabinin
2014-11-21 11:19                   ` Andrey Ryabinin
2014-11-21 11:06               ` Andrey Ryabinin
2014-11-21 11:06                 ` Andrey Ryabinin
2014-11-18 23:38   ` Sasha Levin
2014-11-18 23:38     ` Sasha Levin
2014-11-19  0:09     ` Andrey Ryabinin
2014-11-19  0:09       ` Andrey Ryabinin
2014-11-19  0:44       ` Sasha Levin
2014-11-19  0:44         ` Sasha Levin
2014-11-19 12:41         ` Andrey Ryabinin
2014-11-19 12:41           ` Andrey Ryabinin
2014-11-24 18:02 ` [PATCH v7 00/12] " Andrey Ryabinin
2014-11-24 18:02   ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 01/12] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:40     ` Dmitry Chernenkov
2014-11-25 12:40       ` Dmitry Chernenkov
2014-11-25 14:16       ` Andrey Ryabinin
2014-11-25 14:16         ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 02/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:41     ` Dmitry Chernenkov
2014-11-25 12:41       ` Dmitry Chernenkov
2014-11-24 18:02   ` [PATCH v7 03/12] x86_64: add KASan support Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-24 18:45     ` Sasha Levin
2014-11-24 18:45       ` Sasha Levin
2014-11-24 21:26       ` Andrey Ryabinin
2014-11-24 21:26         ` Andrey Ryabinin
2014-11-25 10:47         ` Dmitry Chernenkov
2014-11-25 10:47           ` Dmitry Chernenkov
2014-11-24 18:02   ` [PATCH v7 04/12] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:28     ` Dmitry Chernenkov
2014-11-25 12:28       ` Dmitry Chernenkov
2014-11-24 18:02   ` [PATCH v7 05/12] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-24 20:08     ` Christoph Lameter
2014-11-24 20:08       ` Christoph Lameter
2014-11-24 18:02   ` [PATCH v7 06/12] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:26     ` Dmitry Chernenkov
2014-11-25 12:26       ` Dmitry Chernenkov
2014-11-24 18:02   ` [PATCH v7 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:22     ` Dmitry Chernenkov
2014-11-25 12:22       ` Dmitry Chernenkov
2014-11-25 13:11       ` Andrey Ryabinin
2014-11-25 13:11         ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 08/12] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 12:17     ` Dmitry Chernenkov
2014-11-25 12:17       ` Dmitry Chernenkov
2014-11-25 13:18       ` Andrey Ryabinin
2014-11-25 13:18         ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 09/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 10/12] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 11/12] lib: add kasan test module Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-25 11:14     ` Dmitry Chernenkov
2014-11-25 11:14       ` Dmitry Chernenkov
2014-11-25 13:09       ` Andrey Ryabinin
2014-11-25 13:09         ` Andrey Ryabinin
2014-11-24 18:02   ` [PATCH v7 12/12] x86_64: kasan: add interceptors for memset/memmove/memcpy functions Andrey Ryabinin
2014-11-24 18:02     ` Andrey Ryabinin
2014-11-27 16:00 ` [PATCH v8 00/12] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2014-11-27 16:00   ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 01/12] Add kernel address sanitizer infrastructure Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-12-01 23:13     ` David Rientjes
2014-12-01 23:13       ` David Rientjes
2014-11-27 16:00   ` [PATCH v8 02/12] x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 03/12] x86_64: add KASan support Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 04/12] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 05/12] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 06/12] mm: slub: share slab_err and object_err functions Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 07/12] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 08/12] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 09/12] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 10/12] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-12-01 16:28     ` Catalin Marinas
2014-12-01 16:28       ` Catalin Marinas
2014-11-27 16:00   ` [PATCH v8 11/12] lib: add kasan test module Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2014-11-27 16:00   ` [PATCH v8 12/12] x86_64: kasan: add interceptors for memset/memmove/memcpy functions Andrey Ryabinin
2014-11-27 16:00     ` Andrey Ryabinin
2015-01-21 16:51 ` [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2015-01-21 16:51   ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 01/17] Add kernel address sanitizer infrastructure Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-23 12:20     ` Michal Marek
2015-01-23 12:35     ` Michal Marek
2015-01-23 12:48       ` Andrey Ryabinin
2015-01-23 12:48         ` Andrey Ryabinin
2015-01-23 12:51         ` Michal Marek
2015-01-21 16:51   ` [PATCH v9 02/17] x86_64: add KASan support Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 03/17] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 04/17] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 05/17] mm: slub: share object_err function Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 07/17] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 20:47     ` Sasha Levin
2015-01-21 20:47       ` Sasha Levin
2015-01-21 21:48       ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 09/17] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 10/17] lib: add kasan test module Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 11/17] x86_64: kasan: add interceptors for memset/memmove/memcpy functions Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 12/17] kasan: enable stack instrumentation Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 13/17] mm: vmalloc: add flag preventing guard hole allocation Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 15/17] kernel: add support for .init_array.* constructors Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 16/17] module: fix types of device tables aliases Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51   ` [PATCH v9 17/17] kasan: enable instrumentation of global variables Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-21 16:51     ` Andrey Ryabinin
2015-01-22  0:22   ` [PATCH v9 00/17] Kernel address sanitizer - runtime memory debugger Sasha Levin
2015-01-22  0:22     ` Sasha Levin
2015-01-22  5:34     ` Andrey Ryabinin
2015-01-22  5:53       ` Andrey Ryabinin
2015-01-22 21:46         ` Sasha Levin
2015-01-22 21:46           ` Sasha Levin
2015-01-23  9:50           ` y.gribov
2015-01-23 10:14           ` Andrey Ryabinin
2015-01-23 10:14             ` Andrey Ryabinin
2015-01-29 15:11 ` [PATCH v10 " Andrey Ryabinin
2015-01-29 15:11   ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 01/17] Add kernel address sanitizer infrastructure Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:39     ` Michal Marek
2015-01-29 23:12     ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-30 16:04       ` Andrey Ryabinin
2015-01-30 16:04         ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 02/17] x86_64: add KASan support Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 23:12     ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-30 16:15       ` Andrey Ryabinin
2015-01-30 16:15         ` Andrey Ryabinin
2015-01-30 21:35         ` Andrew Morton
2015-01-30 21:35           ` Andrew Morton
2015-01-30 21:37         ` Andrew Morton
2015-01-30 21:37           ` Andrew Morton
2015-01-30 23:27           ` Andrey Ryabinin
2015-01-30 23:27             ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 03/17] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 04/17] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 23:12     ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-30 16:17       ` Andrey Ryabinin
2015-01-30 16:17         ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 05/17] mm: slub: share object_err function Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 06/17] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 23:12     ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-30 17:05       ` Andrey Ryabinin
2015-01-30 17:05         ` Andrey Ryabinin
2015-01-30 21:42         ` Andrew Morton
2015-01-30 21:42           ` Andrew Morton
2015-01-30 23:11           ` Andrey Ryabinin
2015-01-30 23:11             ` Andrey Ryabinin
2015-01-30 23:16             ` Andrew Morton
2015-01-30 23:16               ` Andrew Morton
2015-01-30 23:19               ` Andrey Ryabinin
2015-01-30 23:19                 ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 07/17] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 08/17] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 09/17] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 10/17] lib: add kasan test module Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 11/17] x86_64: kasan: add interceptors for memset/memmove/memcpy functions Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 12/17] kasan: enable stack instrumentation Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 13/17] mm: vmalloc: add flag preventing guard hole allocation Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 23:12     ` Andrew Morton
2015-01-29 23:12       ` Andrew Morton
2015-01-30 17:51       ` Andrey Ryabinin
2015-01-30 17:51         ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 14/17] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11   ` [PATCH v10 15/17] kernel: add support for .init_array.* constructors Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 15:11     ` Andrey Ryabinin
2015-01-29 23:13     ` Andrew Morton
2015-01-29 23:13       ` Andrew Morton
2015-01-29 23:13       ` Andrew Morton
2015-01-30 17:21       ` Andrey Ryabinin
2015-01-30 17:21         ` Andrey Ryabinin
2015-01-29 15:12   ` [PATCH v10 16/17] module: fix types of device tables aliases Andrey Ryabinin
2015-01-29 15:12     ` Andrey Ryabinin
2015-01-29 23:13     ` Andrew Morton
2015-01-29 23:13       ` Andrew Morton
2015-01-30 17:44       ` Andrey Ryabinin
2015-01-30 17:44         ` Andrey Ryabinin
2015-01-29 15:12   ` [PATCH v10 17/17] kasan: enable instrumentation of global variables Andrey Ryabinin
2015-01-29 15:12     ` Andrey Ryabinin
2015-01-29 15:12     ` Andrey Ryabinin
2015-01-29 23:13     ` Andrew Morton
2015-01-29 23:13       ` Andrew Morton
2015-01-29 23:13       ` Andrew Morton
2015-01-30 17:47       ` Andrey Ryabinin
2015-01-30 17:47         ` Andrey Ryabinin
2015-01-30 21:45         ` Andrew Morton
2015-01-30 21:45           ` Andrew Morton
2015-01-30 23:18           ` Andrey Ryabinin
2015-01-30 23:18             ` Andrey Ryabinin
2015-02-03 17:42 ` [PATCH v11 00/19] Kernel address sanitizer - runtime memory debugger Andrey Ryabinin
2015-02-03 17:42   ` Andrey Ryabinin
2015-02-03 17:42   ` [PATCH v11 01/19] compiler: introduce __alias(symbol) shortcut Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:42   ` [PATCH v11 02/19] Add kernel address sanitizer infrastructure Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 23:04     ` Andrew Morton
2015-02-03 23:04       ` Andrew Morton
2015-02-03 23:04       ` Andrew Morton
2015-02-04  3:56       ` Andrey Konovalov
2015-02-04  4:00       ` Andrey Konovalov
2015-02-04  4:00         ` Andrey Konovalov
2015-02-03 17:42   ` [PATCH v11 03/19] kasan: disable memory hotplug Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:42   ` [PATCH v11 04/19] x86_64: add KASan support Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:42   ` [PATCH v11 05/19] mm: page_alloc: add kasan hooks on alloc and free paths Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:42   ` [PATCH v11 06/19] mm: slub: introduce virt_to_obj function Andrey Ryabinin
2015-02-03 17:42     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 07/19] mm: slub: share object_err function Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 08/19] mm: slub: introduce metadata_access_enable()/metadata_access_disable() Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 09/19] mm: slub: add kernel address sanitizer support for slub allocator Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 10/19] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 11/19] kmemleak: disable kasan instrumentation for kmemleak Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 12/19] lib: add kasan test module Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 13/19] x86_64: kasan: add interceptors for memset/memmove/memcpy functions Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 14/19] kasan: enable stack instrumentation Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 15/19] mm: vmalloc: add flag preventing guard hole allocation Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 16/19] mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 17/19] kernel: add support for .init_array.* constructors Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 18/19] module: fix types of device tables aliases Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 23:51     ` Andrew Morton
2015-02-03 23:51       ` Andrew Morton
2015-02-04  0:01       ` Sasha Levin
2015-02-04  0:01         ` Sasha Levin
2015-02-04  0:10         ` Andrew Morton
2015-02-04  0:10           ` Andrew Morton
2015-02-16  2:44     ` Rusty Russell
2015-02-16  2:44       ` Rusty Russell
2015-02-16 14:01       ` Andrey Ryabinin
2015-02-16 14:01         ` Andrey Ryabinin
2015-02-03 17:43   ` [PATCH v11 19/19] kasan: enable instrumentation of global variables Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-03 17:43     ` Andrey Ryabinin
2015-02-16  2:58     ` Rusty Russell
2015-02-16  2:58       ` Rusty Russell
2015-02-16  2:58       ` Rusty Russell
2015-02-16 14:44       ` Andrey Ryabinin
2015-02-16 14:44         ` Andrey Ryabinin
2015-02-16 14:47         ` Dmitry Vyukov
2015-02-16 14:47           ` Dmitry Vyukov
2015-02-16 15:09           ` Andrey Ryabinin
2015-02-16 15:09             ` Andrey Ryabinin
2015-02-16 23:55         ` Rusty Russell
2015-02-16 23:55           ` Rusty Russell
2015-02-16 23:55           ` Rusty Russell
