* [PATCH 1/6] mm/kasan: fix corruptions and false positive reports
@ 2016-08-01 14:45 ` Andrey Ryabinin
  0 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin

Once an object is put into the quarantine, we no longer own it, i.e. the
object could leave the quarantine and be reallocated at any time. So
calling set_track() after quarantine_put() may corrupt slab objects.

 BUG kmalloc-4096 (Not tainted): Poison overwritten
 -----------------------------------------------------------------------------
 Disabling lock debugging due to kernel taint
 INFO: 0xffff8804540de850-0xffff8804540de857. First byte 0xb5 instead of 0x6b
...
 INFO: Freed in qlist_free_all+0x42/0x100 age=75 cpu=3 pid=24492
  __slab_free+0x1d6/0x2e0
  ___cache_free+0xb6/0xd0
  qlist_free_all+0x83/0x100
  quarantine_reduce+0x177/0x1b0
  kasan_kmalloc+0xf3/0x100
  kasan_slab_alloc+0x12/0x20
  kmem_cache_alloc+0x109/0x3e0
  mmap_region+0x53e/0xe40
  do_mmap+0x70f/0xa50
  vm_mmap_pgoff+0x147/0x1b0
  SyS_mmap_pgoff+0x2c7/0x5b0
  SyS_mmap+0x1b/0x30
  do_syscall_64+0x1a0/0x4e0
  return_from_SYSCALL_64+0x0/0x7a
 INFO: Slab 0xffffea0011503600 objects=7 used=7 fp=0x          (null) flags=0x8000000000004080
 INFO: Object 0xffff8804540de848 @offset=26696 fp=0xffff8804540dc588
 Redzone ffff8804540de840: bb bb bb bb bb bb bb bb                          ........
 Object ffff8804540de848: 6b 6b 6b 6b 6b 6b 6b 6b b5 52 00 00 f2 01 60 cc  kkkkkkkk.R....`.

Similarly, poisoning the object after quarantine_put() leads to
false-positive use-after-free reports:

 BUG: KASAN: use-after-free in anon_vma_interval_tree_insert+0x304/0x430 at addr ffff880405c540a0
 Read of size 8 by task trinity-c0/3036
 CPU: 0 PID: 3036 Comm: trinity-c0 Not tainted 4.7.0-think+ #9
  ffff880405c54200 00000000c5c4423e ffff88044a5ef9f0 ffffffffaea48532
  ffff88044a5efa88 ffff880461497a00 ffff88044a5efa78 ffffffffae57cfe2
  ffff88046501c958 ffff880436aa5440 0000000000000282 0000000000000007
 Call Trace:
  [<ffffffffaea48532>] dump_stack+0x68/0x96
  [<ffffffffae57cfe2>] kasan_report_error+0x222/0x600
  [<ffffffffae57d571>] __asan_report_load8_noabort+0x61/0x70
  [<ffffffffae4f8924>] anon_vma_interval_tree_insert+0x304/0x430
  [<ffffffffae52f811>] anon_vma_chain_link+0x91/0xd0
  [<ffffffffae536e46>] anon_vma_clone+0x136/0x3f0
  [<ffffffffae537181>] anon_vma_fork+0x81/0x4c0
  [<ffffffffae125663>] copy_process.part.47+0x2c43/0x5b20
  [<ffffffffae12895d>] _do_fork+0x16d/0xbd0
  [<ffffffffae129469>] SyS_clone+0x19/0x20
  [<ffffffffae0064b0>] do_syscall_64+0x1a0/0x4e0
  [<ffffffffafa09b1a>] entry_SYSCALL64_slow_path+0x25/0x25

Fix this by putting the object into the quarantine only after all other
operations on it are done.

Fixes: 80a9201a5965 ("mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Reported-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/kasan/kasan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index b6f99e8..3019cec 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -543,9 +543,9 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
 		switch (alloc_info->state) {
 		case KASAN_STATE_ALLOC:
 			alloc_info->state = KASAN_STATE_QUARANTINE;
-			quarantine_put(free_info, cache);
 			set_track(&free_info->track, GFP_NOWAIT);
 			kasan_poison_slab_free(cache, object);
+			quarantine_put(free_info, cache);
 			return true;
 		case KASAN_STATE_QUARANTINE:
 		case KASAN_STATE_FREE:
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 2/6] mm/kasan: don't reduce quarantine in atomic contexts
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin

Currently we call quarantine_reduce() for any allocation with
___GFP_KSWAPD_RECLAIM set (implied by __GFP_RECLAIM), i.e. on almost
every allocation. quarantine_reduce() can be a heavy operation, and
calling it with interrupts disabled may trigger a hard lockup:

 NMI watchdog: Watchdog detected hard LOCKUP on cpu 2
 irq event stamp: 1411258
 Call Trace:
  <NMI>  [<ffffffff98a48532>] dump_stack+0x68/0x96
  [<ffffffff98357fbb>] watchdog_overflow_callback+0x15b/0x190
  [<ffffffff9842f7d1>] __perf_event_overflow+0x1b1/0x540
  [<ffffffff98455b14>] perf_event_overflow+0x14/0x20
  [<ffffffff9801976a>] intel_pmu_handle_irq+0x36a/0xad0
  [<ffffffff9800ba4c>] perf_event_nmi_handler+0x2c/0x50
  [<ffffffff98057058>] nmi_handle+0x128/0x480
  [<ffffffff980576d2>] default_do_nmi+0xb2/0x210
  [<ffffffff980579da>] do_nmi+0x1aa/0x220
  [<ffffffff99a0bb07>] end_repeat_nmi+0x1a/0x1e
  <<EOE>>  [<ffffffff981871e6>] __kernel_text_address+0x86/0xb0
  [<ffffffff98055c4b>] print_context_stack+0x7b/0x100
  [<ffffffff98054e9b>] dump_trace+0x12b/0x350
  [<ffffffff98076ceb>] save_stack_trace+0x2b/0x50
  [<ffffffff98573003>] set_track+0x83/0x140
  [<ffffffff98575f4a>] free_debug_processing+0x1aa/0x420
  [<ffffffff98578506>] __slab_free+0x1d6/0x2e0
  [<ffffffff9857a9b6>] ___cache_free+0xb6/0xd0
  [<ffffffff9857db53>] qlist_free_all+0x83/0x100
  [<ffffffff9857df07>] quarantine_reduce+0x177/0x1b0
  [<ffffffff9857c423>] kasan_kmalloc+0xf3/0x100

Reduce the quarantine only if direct reclaim is allowed, i.e. when the
allocation may block.

Fixes: 55834c59098d ("mm: kasan: initial memory quarantine implementation")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/kasan/kasan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 3019cec..c99ef40 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -565,7 +565,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	unsigned long redzone_start;
 	unsigned long redzone_end;
 
-	if (flags & __GFP_RECLAIM)
+	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
 
 	if (unlikely(object == NULL))
@@ -596,7 +596,7 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	unsigned long redzone_start;
 	unsigned long redzone_end;
 
-	if (flags & __GFP_RECLAIM)
+	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
 
 	if (unlikely(ptr == NULL))
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 3/6] mm/kasan, slub: don't disable interrupts when object leaves quarantine
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin

SLUB doesn't require disabled interrupts to call ___cache_free().

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/kasan/quarantine.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 65793f1..4852625 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -147,10 +147,14 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
 	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
 	unsigned long flags;
 
-	local_irq_save(flags);
+	if (IS_ENABLED(CONFIG_SLAB))
+		local_irq_save(flags);
+
 	alloc_info->state = KASAN_STATE_FREE;
 	___cache_free(cache, object, _THIS_IP_);
-	local_irq_restore(flags);
+
+	if (IS_ENABLED(CONFIG_SLAB))
+		local_irq_restore(flags);
 }
 
 static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 4/6] mm/kasan: get rid of ->alloc_size in struct kasan_alloc_meta
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin

The size of a slab object is already stored in cache->object_size.

Note that kmalloc() internally rounds up the allocation size, so
object_size may not be equal to alloc_size, but we usually don't need
to know the exact size of the allocated object. In case we do need that
information, we can still figure it out from the report: the dump of
the shadow memory identifies the end of the allocated memory, and
thereby the exact allocation size.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/kasan/kasan.c  | 1 -
 mm/kasan/kasan.h  | 3 +--
 mm/kasan/report.c | 8 +++-----
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index c99ef40..388e812 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -584,7 +584,6 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 			get_alloc_info(cache, object);
 
 		alloc_info->state = KASAN_STATE_ALLOC;
-		alloc_info->alloc_size = size;
 		set_track(&alloc_info->track, flags);
 	}
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 31972cd..aa17546 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -75,8 +75,7 @@ struct kasan_track {
 
 struct kasan_alloc_meta {
 	struct kasan_track track;
-	u32 state : 2;	/* enum kasan_state */
-	u32 alloc_size : 30;
+	u32 state;
 };
 
 struct qlist_node {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 861b977..d67a7e0 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -136,7 +136,9 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
 	struct kasan_free_meta *free_info;
 
 	dump_stack();
-	pr_err("Object at %p, in cache %s\n", object, cache->name);
+	pr_err("Object at %p, in cache %s size: %d\n", object, cache->name,
+		cache->object_size);
+
 	if (!(cache->flags & SLAB_KASAN))
 		return;
 	switch (alloc_info->state) {
@@ -144,15 +146,11 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
 		pr_err("Object not allocated yet\n");
 		break;
 	case KASAN_STATE_ALLOC:
-		pr_err("Object allocated with size %u bytes.\n",
-		       alloc_info->alloc_size);
 		pr_err("Allocation:\n");
 		print_track(&alloc_info->track);
 		break;
 	case KASAN_STATE_FREE:
 	case KASAN_STATE_QUARANTINE:
-		pr_err("Object freed, allocated with size %u bytes\n",
-		       alloc_info->alloc_size);
 		free_info = get_free_info(cache, object);
 		pr_err("Allocation:\n");
 		print_track(&alloc_info->track);
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 5/6] mm/kasan: get rid of ->state in struct kasan_alloc_meta
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin, xiaolong.ye, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim

The state of an object is currently tracked in two places: the shadow
memory and the ->state field in struct kasan_alloc_meta. We can get rid
of the latter, which saves a little bit of memory. It also allows us to
move the free stack into struct kasan_alloc_meta without increasing
memory consumption, so we always know when the object was last freed.
This may be useful for long-delayed use-after-free bugs.

As a side effect this fixes following UBSAN warning:
	UBSAN: Undefined behaviour in mm/kasan/quarantine.c:102:13
	member access within misaligned address ffff88000d1efebc for type 'struct qlist_node'
	which requires 8 byte alignment

Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/kasan.h |  3 +++
 mm/kasan/kasan.c      | 61 +++++++++++++++++++++++----------------------------
 mm/kasan/kasan.h      | 12 ++--------
 mm/kasan/quarantine.c |  2 --
 mm/kasan/report.c     | 23 +++++--------------
 mm/slab.c             |  4 +++-
 mm/slub.c             |  1 +
 7 files changed, 42 insertions(+), 64 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c9cf374..d600303 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -56,6 +56,7 @@ void kasan_cache_destroy(struct kmem_cache *cache);
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
 void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(const void *ptr);
@@ -102,6 +103,8 @@ static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
 					void *object) {}
 static inline void kasan_poison_object_data(struct kmem_cache *cache,
 					void *object) {}
+static inline void kasan_init_slab_obj(struct kmem_cache *cache,
+				const void *object) {}
 
 static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
 static inline void kasan_kfree_large(const void *ptr) {}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 388e812..92750e3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -442,11 +442,6 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 	kasan_poison_shadow(object,
 			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
 			KASAN_KMALLOC_REDZONE);
-	if (cache->flags & SLAB_KASAN) {
-		struct kasan_alloc_meta *alloc_info =
-			get_alloc_info(cache, object);
-		alloc_info->state = KASAN_STATE_INIT;
-	}
 }
 
 static inline int in_irqentry_text(unsigned long ptr)
@@ -510,6 +505,17 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 	return (void *)object + cache->kasan_info.free_meta_offset;
 }
 
+void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_info;
+
+	if (!(cache->flags & SLAB_KASAN))
+		return;
+
+	alloc_info = get_alloc_info(cache, object);
+	__memset(alloc_info, 0, sizeof(*alloc_info));
+}
+
 void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 	kasan_kmalloc(cache, object, cache->object_size, flags);
@@ -529,34 +535,27 @@ static void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object)
 {
+	s8 shadow_byte;
+
 	/* RCU slabs could be legally used after free within the RCU period */
 	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
 		return false;
 
-	if (likely(cache->flags & SLAB_KASAN)) {
-		struct kasan_alloc_meta *alloc_info;
-		struct kasan_free_meta *free_info;
+	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
+	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
+		pr_err("Double free");
+		dump_stack();
+		return true;
+	}
 
-		alloc_info = get_alloc_info(cache, object);
-		free_info = get_free_info(cache, object);
+	kasan_poison_slab_free(cache, object);
 
-		switch (alloc_info->state) {
-		case KASAN_STATE_ALLOC:
-			alloc_info->state = KASAN_STATE_QUARANTINE;
-			set_track(&free_info->track, GFP_NOWAIT);
-			kasan_poison_slab_free(cache, object);
-			quarantine_put(free_info, cache);
-			return true;
-		case KASAN_STATE_QUARANTINE:
-		case KASAN_STATE_FREE:
-			pr_err("Double free");
-			dump_stack();
-			break;
-		default:
-			break;
-		}
-	}
-	return false;
+	if (unlikely(!(cache->flags & SLAB_KASAN)))
+		return false;
+
+	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
+	quarantine_put(get_free_info(cache, object), cache);
+	return true;
 }
 
 void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
@@ -579,13 +578,9 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	kasan_unpoison_shadow(object, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
-	if (cache->flags & SLAB_KASAN) {
-		struct kasan_alloc_meta *alloc_info =
-			get_alloc_info(cache, object);
 
-		alloc_info->state = KASAN_STATE_ALLOC;
-		set_track(&alloc_info->track, flags);
-	}
+	if (cache->flags & SLAB_KASAN)
+		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aa17546..9b7b31e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -59,13 +59,6 @@ struct kasan_global {
  * Structures to keep alloc and free tracks *
  */
 
-enum kasan_state {
-	KASAN_STATE_INIT,
-	KASAN_STATE_ALLOC,
-	KASAN_STATE_QUARANTINE,
-	KASAN_STATE_FREE
-};
-
 #define KASAN_STACK_DEPTH 64
 
 struct kasan_track {
@@ -74,8 +67,8 @@ struct kasan_track {
 };
 
 struct kasan_alloc_meta {
-	struct kasan_track track;
-	u32 state;
+	struct kasan_track alloc_track;
+	struct kasan_track free_track;
 };
 
 struct qlist_node {
@@ -86,7 +79,6 @@ struct kasan_free_meta {
 	 * Otherwise it might be used for the allocator freelist.
 	 */
 	struct qlist_node quarantine_link;
-	struct kasan_track track;
 };
 
* [PATCH 5/6] mm/kasan: get rid of ->state in struct kasan_alloc_meta
@ 2016-08-01 14:45   ` Andrey Ryabinin
  0 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin, xiaolong.ye, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim

The state of an object is currently tracked in two places: the shadow
memory and the ->state field in struct kasan_alloc_meta.
We can get rid of the latter, which saves us a little bit of memory.
This also allows us to move the free stack into struct kasan_alloc_meta
without increasing memory consumption, so we now always know when the
object was last freed. That is useful for diagnosing long-delayed
use-after-free bugs.

As a side effect, this fixes the following UBSAN warning:
	UBSAN: Undefined behaviour in mm/kasan/quarantine.c:102:13
	member access within misaligned address ffff88000d1efebc for type 'struct qlist_node'
	which requires 8 byte alignment
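
The idea of deriving the object state from the shadow byte instead of an
explicit ->state field can be sketched in a small userspace model (a sketch
only: the poison value and helper name are illustrative assumptions, not
the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace model of the idea behind this patch: the shadow byte
 * already encodes whether an object is allocated, so the explicit
 * ->state field is redundant.  A value of 0..7 means the first shadow
 * granule is at least partially accessible; negative bytes are poison
 * markers such as the freed-object value.
 */
#define KASAN_SHADOW_SCALE_SIZE 8
#define KASAN_KMALLOC_FREE ((int8_t)0xFB)	/* illustrative poison byte */

static int object_is_allocated(int8_t shadow_byte)
{
	return shadow_byte >= 0 && shadow_byte < KASAN_SHADOW_SCALE_SIZE;
}
```

This is exactly the condition kasan_slab_free() inverts below to detect a
double free without consulting any per-object state field.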

Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/kasan.h |  3 +++
 mm/kasan/kasan.c      | 61 +++++++++++++++++++++++----------------------------
 mm/kasan/kasan.h      | 12 ++--------
 mm/kasan/quarantine.c |  2 --
 mm/kasan/report.c     | 23 +++++--------------
 mm/slab.c             |  4 +++-
 mm/slub.c             |  1 +
 7 files changed, 42 insertions(+), 64 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c9cf374..d600303 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -56,6 +56,7 @@ void kasan_cache_destroy(struct kmem_cache *cache);
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
 void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(const void *ptr);
@@ -102,6 +103,8 @@ static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
 					void *object) {}
 static inline void kasan_poison_object_data(struct kmem_cache *cache,
 					void *object) {}
+static inline void kasan_init_slab_obj(struct kmem_cache *cache,
+				const void *object) {}
 
 static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
 static inline void kasan_kfree_large(const void *ptr) {}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 388e812..92750e3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -442,11 +442,6 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 	kasan_poison_shadow(object,
 			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
 			KASAN_KMALLOC_REDZONE);
-	if (cache->flags & SLAB_KASAN) {
-		struct kasan_alloc_meta *alloc_info =
-			get_alloc_info(cache, object);
-		alloc_info->state = KASAN_STATE_INIT;
-	}
 }
 
 static inline int in_irqentry_text(unsigned long ptr)
@@ -510,6 +505,17 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 	return (void *)object + cache->kasan_info.free_meta_offset;
 }
 
+void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_info;
+
+	if (!(cache->flags & SLAB_KASAN))
+		return;
+
+	alloc_info = get_alloc_info(cache, object);
+	__memset(alloc_info, 0, sizeof(*alloc_info));
+}
+
 void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 	kasan_kmalloc(cache, object, cache->object_size, flags);
@@ -529,34 +535,27 @@ static void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object)
 {
+	s8 shadow_byte;
+
 	/* RCU slabs could be legally used after free within the RCU period */
 	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
 		return false;
 
-	if (likely(cache->flags & SLAB_KASAN)) {
-		struct kasan_alloc_meta *alloc_info;
-		struct kasan_free_meta *free_info;
+	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
+	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
+		pr_err("Double free");
+		dump_stack();
+		return true;
+	}
 
-		alloc_info = get_alloc_info(cache, object);
-		free_info = get_free_info(cache, object);
+	kasan_poison_slab_free(cache, object);
 
-		switch (alloc_info->state) {
-		case KASAN_STATE_ALLOC:
-			alloc_info->state = KASAN_STATE_QUARANTINE;
-			set_track(&free_info->track, GFP_NOWAIT);
-			kasan_poison_slab_free(cache, object);
-			quarantine_put(free_info, cache);
-			return true;
-		case KASAN_STATE_QUARANTINE:
-		case KASAN_STATE_FREE:
-			pr_err("Double free");
-			dump_stack();
-			break;
-		default:
-			break;
-		}
-	}
-	return false;
+	if (unlikely(!(cache->flags & SLAB_KASAN)))
+		return false;
+
+	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
+	quarantine_put(get_free_info(cache, object), cache);
+	return true;
 }
 
 void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
@@ -579,13 +578,9 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	kasan_unpoison_shadow(object, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
-	if (cache->flags & SLAB_KASAN) {
-		struct kasan_alloc_meta *alloc_info =
-			get_alloc_info(cache, object);
 
-		alloc_info->state = KASAN_STATE_ALLOC;
-		set_track(&alloc_info->track, flags);
-	}
+	if (cache->flags & SLAB_KASAN)
+		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aa17546..9b7b31e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -59,13 +59,6 @@ struct kasan_global {
  * Structures to keep alloc and free tracks *
  */
 
-enum kasan_state {
-	KASAN_STATE_INIT,
-	KASAN_STATE_ALLOC,
-	KASAN_STATE_QUARANTINE,
-	KASAN_STATE_FREE
-};
-
 #define KASAN_STACK_DEPTH 64
 
 struct kasan_track {
@@ -74,8 +67,8 @@ struct kasan_track {
 };
 
 struct kasan_alloc_meta {
-	struct kasan_track track;
-	u32 state;
+	struct kasan_track alloc_track;
+	struct kasan_track free_track;
 };
 
 struct qlist_node {
@@ -86,7 +79,6 @@ struct kasan_free_meta {
 	 * Otherwise it might be used for the allocator freelist.
 	 */
 	struct qlist_node quarantine_link;
-	struct kasan_track track;
 };
 
 struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 4852625..7fd121d 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -144,13 +144,11 @@ static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache)
 static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
 {
 	void *object = qlink_to_object(qlink, cache);
-	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
 	unsigned long flags;
 
 	if (IS_ENABLED(CONFIG_SLAB))
 		local_irq_save(flags);
 
-	alloc_info->state = KASAN_STATE_FREE;
 	___cache_free(cache, object, _THIS_IP_);
 
 	if (IS_ENABLED(CONFIG_SLAB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index d67a7e0..f437398 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -133,7 +133,6 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
 				void *object, char *unused_reason)
 {
 	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
-	struct kasan_free_meta *free_info;
 
 	dump_stack();
 	pr_err("Object at %p, in cache %s size: %d\n", object, cache->name,
@@ -141,23 +140,11 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
 
 	if (!(cache->flags & SLAB_KASAN))
 		return;
-	switch (alloc_info->state) {
-	case KASAN_STATE_INIT:
-		pr_err("Object not allocated yet\n");
-		break;
-	case KASAN_STATE_ALLOC:
-		pr_err("Allocation:\n");
-		print_track(&alloc_info->track);
-		break;
-	case KASAN_STATE_FREE:
-	case KASAN_STATE_QUARANTINE:
-		free_info = get_free_info(cache, object);
-		pr_err("Allocation:\n");
-		print_track(&alloc_info->track);
-		pr_err("Deallocation:\n");
-		print_track(&free_info->track);
-		break;
-	}
+
+	pr_err("Allocated:\n");
+	print_track(&alloc_info->alloc_track);
+	pr_err("Freed:\n");
+	print_track(&alloc_info->free_track);
 }
 
 static void print_address_description(struct kasan_access_info *info)
diff --git a/mm/slab.c b/mm/slab.c
index 09771ed..ca135bd 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2604,9 +2604,11 @@ static void cache_init_objs(struct kmem_cache *cachep,
 	}
 
 	for (i = 0; i < cachep->num; i++) {
+		objp = index_to_obj(cachep, page, i);
+		kasan_init_slab_obj(cachep, objp);
+
 		/* constructor could break poison info */
 		if (DEBUG == 0 && cachep->ctor) {
-			objp = index_to_obj(cachep, page, i);
 			kasan_unpoison_object_data(cachep, objp);
 			cachep->ctor(objp);
 			kasan_poison_object_data(cachep, objp);
diff --git a/mm/slub.c b/mm/slub.c
index 74e7c8c..26eb6a99 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1384,6 +1384,7 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
+	kasan_init_slab_obj(s, object);
 	if (unlikely(s->ctor)) {
 		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
-- 
2.7.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 6/6] kasan: improve double-free reports.
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Potapenko, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, linux-kernel, linux-mm,
	Andrey Ryabinin

Currently we just dump the stack in case of a double-free bug.
Let's dump all the info about the object that we have.
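
The begin/end report bracketing that this patch factors out can be modeled
in userspace as follows (a sketch under stated assumptions: the globals
stand in for current's kasan depth, the taint flag, and report_lock, and
the function names are illustrative):

```c
#include <assert.h>
#include <stdio.h>

/*
 * Sketch of the kasan_start_report()/kasan_end_report() pairing:
 * every report path shares one bracket, so the header/footer lines,
 * the recursion guard, and the taint cannot drift apart.
 */
static int kasan_depth;	/* models kasan_disable/enable_current() */
static int tainted;	/* models add_taint(TAINT_BAD_PAGE, ...) */

static void start_report(void)
{
	kasan_depth++;	/* suppress KASAN reports triggered while reporting */
	puts("==================================================================");
}

static void end_report(void)
{
	puts("==================================================================");
	tainted = 1;
	kasan_depth--;
}

static void report_double_free(signed char shadow)
{
	start_report();
	printf("BUG: Double free or corrupt pointer\n");
	printf("Unexpected shadow byte: 0x%hhX\n", shadow);
	end_report();
}
```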

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/kasan/kasan.c  |  3 +--
 mm/kasan/kasan.h  |  2 ++
 mm/kasan/report.c | 54 ++++++++++++++++++++++++++++++++++++++----------------
 3 files changed, 41 insertions(+), 18 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 92750e3..88af13c 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -543,8 +543,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
 	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
-		pr_err("Double free");
-		dump_stack();
+		kasan_report_double_free(cache, object, shadow_byte);
 		return true;
 	}
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 9b7b31e..e5c2181 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -99,6 +99,8 @@ static inline bool kasan_report_enabled(void)
 
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
+void kasan_report_double_free(struct kmem_cache *cache, void *object,
+			s8 shadow);
 
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f437398..ee2bdb4 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -116,6 +116,26 @@ static inline bool init_task_stack_addr(const void *addr)
 			sizeof(init_thread_union.stack));
 }
 
+static DEFINE_SPINLOCK(report_lock);
+
+static void kasan_start_report(unsigned long *flags)
+{
+	/*
+	 * Make sure we don't end up in loop.
+	 */
+	kasan_disable_current();
+	spin_lock_irqsave(&report_lock, *flags);
+	pr_err("==================================================================\n");
+}
+
+static void kasan_end_report(unsigned long *flags)
+{
+	pr_err("==================================================================\n");
+	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irqrestore(&report_lock, *flags);
+	kasan_enable_current();
+}
+
 static void print_track(struct kasan_track *track)
 {
 	pr_err("PID = %u\n", track->pid);
@@ -129,8 +149,7 @@ static void print_track(struct kasan_track *track)
 	}
 }
 
-static void kasan_object_err(struct kmem_cache *cache, struct page *page,
-				void *object, char *unused_reason)
+static void kasan_object_err(struct kmem_cache *cache, void *object)
 {
 	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
 
@@ -147,6 +166,18 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
 	print_track(&alloc_info->free_track);
 }
 
+void kasan_report_double_free(struct kmem_cache *cache, void *object,
+			s8 shadow)
+{
+	unsigned long flags;
+
+	kasan_start_report(&flags);
+	pr_err("BUG: Double free or corrupt pointer\n");
+	pr_err("Unexpected shadow byte: 0x%hhX\n", shadow);
+	kasan_object_err(cache, object);
+	kasan_end_report(&flags);
+}
+
 static void print_address_description(struct kasan_access_info *info)
 {
 	const void *addr = info->access_addr;
@@ -160,8 +191,7 @@ static void print_address_description(struct kasan_access_info *info)
 			struct kmem_cache *cache = page->slab_cache;
 			object = nearest_obj(cache, page,
 						(void *)info->access_addr);
-			kasan_object_err(cache, page, object,
-					"kasan: bad access detected");
+			kasan_object_err(cache, object);
 			return;
 		}
 		dump_page(page, "kasan: bad access detected");
@@ -226,19 +256,13 @@ static void print_shadow_for_address(const void *addr)
 	}
 }
 
-static DEFINE_SPINLOCK(report_lock);
-
 static void kasan_report_error(struct kasan_access_info *info)
 {
 	unsigned long flags;
 	const char *bug_type;
 
-	/*
-	 * Make sure we don't end up in loop.
-	 */
-	kasan_disable_current();
-	spin_lock_irqsave(&report_lock, flags);
-	pr_err("==================================================================\n");
+	kasan_start_report(&flags);
+
 	if (info->access_addr <
 			kasan_shadow_to_mem((void *)KASAN_SHADOW_START)) {
 		if ((unsigned long)info->access_addr < PAGE_SIZE)
@@ -259,10 +283,8 @@ static void kasan_report_error(struct kasan_access_info *info)
 		print_address_description(info);
 		print_shadow_for_address(info->first_bad_addr);
 	}
-	pr_err("==================================================================\n");
-	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
-	spin_unlock_irqrestore(&report_lock, flags);
-	kasan_enable_current();
+
+	kasan_end_report(&flags);
 }
 
 void kasan_report(unsigned long addr, size_t size,
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/6] mm/kasan: fix corruptions and false positive reports
  2016-08-01 14:45 ` Andrey Ryabinin
@ 2016-08-01 14:45   ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-01 14:45 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> Once object put in quarantine, we no longer own it, i.e. object could leave
> the quarantine and be reallocated. So having set_track() call after the
> quarantine_put() may corrupt slab objects.
>
>  BUG kmalloc-4096 (Not tainted): Poison overwritten
>  -----------------------------------------------------------------------------
>  Disabling lock debugging due to kernel taint
>  INFO: 0xffff8804540de850-0xffff8804540de857. First byte 0xb5 instead of 0x6b
> ...
>  INFO: Freed in qlist_free_all+0x42/0x100 age=75 cpu=3 pid=24492
>   __slab_free+0x1d6/0x2e0
>   ___cache_free+0xb6/0xd0
>   qlist_free_all+0x83/0x100
>   quarantine_reduce+0x177/0x1b0
>   kasan_kmalloc+0xf3/0x100
>   kasan_slab_alloc+0x12/0x20
>   kmem_cache_alloc+0x109/0x3e0
>   mmap_region+0x53e/0xe40
>   do_mmap+0x70f/0xa50
>   vm_mmap_pgoff+0x147/0x1b0
>   SyS_mmap_pgoff+0x2c7/0x5b0
>   SyS_mmap+0x1b/0x30
>   do_syscall_64+0x1a0/0x4e0
>   return_from_SYSCALL_64+0x0/0x7a
>  INFO: Slab 0xffffea0011503600 objects=7 used=7 fp=0x          (null) flags=0x8000000000004080
>  INFO: Object 0xffff8804540de848 @offset=26696 fp=0xffff8804540dc588
>  Redzone ffff8804540de840: bb bb bb bb bb bb bb bb                          ........
>  Object ffff8804540de848: 6b 6b 6b 6b 6b 6b 6b 6b b5 52 00 00 f2 01 60 cc  kkkkkkkk.R....`.
>
> Similarly, poisoning after the quarantine_put() leads to false positive
> use-after-free reports:
>
>  BUG: KASAN: use-after-free in anon_vma_interval_tree_insert+0x304/0x430 at addr ffff880405c540a0
>  Read of size 8 by task trinity-c0/3036
>  CPU: 0 PID: 3036 Comm: trinity-c0 Not tainted 4.7.0-think+ #9
>   ffff880405c54200 00000000c5c4423e ffff88044a5ef9f0 ffffffffaea48532
>   ffff88044a5efa88 ffff880461497a00 ffff88044a5efa78 ffffffffae57cfe2
>   ffff88046501c958 ffff880436aa5440 0000000000000282 0000000000000007
>  Call Trace:
>   [<ffffffffaea48532>] dump_stack+0x68/0x96
>   [<ffffffffae57cfe2>] kasan_report_error+0x222/0x600
>   [<ffffffffae57d571>] __asan_report_load8_noabort+0x61/0x70
>   [<ffffffffae4f8924>] anon_vma_interval_tree_insert+0x304/0x430
>   [<ffffffffae52f811>] anon_vma_chain_link+0x91/0xd0
>   [<ffffffffae536e46>] anon_vma_clone+0x136/0x3f0
>   [<ffffffffae537181>] anon_vma_fork+0x81/0x4c0
>   [<ffffffffae125663>] copy_process.part.47+0x2c43/0x5b20
>   [<ffffffffae12895d>] _do_fork+0x16d/0xbd0
>   [<ffffffffae129469>] SyS_clone+0x19/0x20
>   [<ffffffffae0064b0>] do_syscall_64+0x1a0/0x4e0
>   [<ffffffffafa09b1a>] entry_SYSCALL64_slow_path+0x25/0x25
>
> Fix this by putting an object in the quarantine after all other operations.
>
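
The ordering rule being fixed above can be shown with a single-threaded
sketch (the structures and names are illustrative stand-ins, not the
kernel's): everything the freeing path must record about the object has to
happen before it is published to the quarantine, because draining the
quarantine can hand the memory straight back to the allocator.

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the slab object and its KASAN metadata. */
struct obj {
	char payload[32];
	char free_track[16];	/* models set_track(&free_info->track, ...) */
	int poisoned;		/* models kasan_poison_slab_free() */
};

static struct obj *quarantine[8];
static unsigned int q_len;

/*
 * After this call the object may be drained and reallocated at any
 * time, so the freeing path must not write to it anymore.
 */
static void quarantine_put_model(struct obj *o)
{
	quarantine[q_len++] = o;
}

static void slab_free_model(struct obj *o)
{
	strcpy(o->free_track, "free stack");	/* record first ...    */
	o->poisoned = 1;			/* ... poison next ...  */
	quarantine_put_model(o);		/* ... publish last     */
}
```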
> Fixes: 80a9201a5965 ("mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB")
> Reported-by: Dave Jones <davej@codemonkey.org.uk>
> Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
> Reported-by: Sasha Levin <alexander.levin@verizon.com>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  mm/kasan/kasan.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index b6f99e8..3019cec 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -543,9 +543,9 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>                 switch (alloc_info->state) {
>                 case KASAN_STATE_ALLOC:
>                         alloc_info->state = KASAN_STATE_QUARANTINE;
> -                       quarantine_put(free_info, cache);
>                         set_track(&free_info->track, GFP_NOWAIT);
>                         kasan_poison_slab_free(cache, object);
> +                       quarantine_put(free_info, cache);
This is exactly the patch I was going to send in a couple of minutes :)
>                         return true;
>                 case KASAN_STATE_QUARANTINE:
>                 case KASAN_STATE_FREE:
> --
> 2.7.3
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 3/6] mm/kasan, slub: don't disable interrupts when object leaves quarantine
  2016-08-01 14:45   ` Andrey Ryabinin
@ 2016-08-01 14:47     ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-01 14:47 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> SLUB doesn't require disabled interrupts to call ___cache_free().
>
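
The IS_ENABLED(CONFIG_SLAB) guards added by this patch can be sketched in
userspace (the config macro, counter, and names are stand-ins, not kernel
code): because the condition is a compile-time constant, a SLUB build
constant-folds the irq save/restore pair away entirely.

```c
#include <assert.h>

#define CONFIG_SLAB_ENABLED 0	/* pretend this is a SLUB build */

static int irq_disable_depth;	/* models the local irq state */

static void local_irq_save_model(void)    { irq_disable_depth++; }
static void local_irq_restore_model(void) { irq_disable_depth--; }

static void qlink_free_model(void)
{
	if (CONFIG_SLAB_ENABLED)	/* compiled out for SLUB */
		local_irq_save_model();

	/* ___cache_free(cache, object, _THIS_IP_) would run here. */

	if (CONFIG_SLAB_ENABLED)
		local_irq_restore_model();
}
```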
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  mm/kasan/quarantine.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 65793f1..4852625 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -147,10 +147,14 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
>         struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
>         unsigned long flags;
>
> -       local_irq_save(flags);
> +       if (IS_ENABLED(CONFIG_SLAB))
> +               local_irq_save(flags);
> +
>         alloc_info->state = KASAN_STATE_FREE;
>         ___cache_free(cache, object, _THIS_IP_);
> -       local_irq_restore(flags);
> +
> +       if (IS_ENABLED(CONFIG_SLAB))
> +               local_irq_restore(flags);
>  }
>
>  static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
> --
> 2.7.3
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> To post to this group, send email to kasan-dev@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/1470062715-14077-3-git-send-email-aryabinin%40virtuozzo.com.
> For more options, visit https://groups.google.com/d/optout.






* Re: [PATCH 6/6] kasan: improve double-free reports.
  2016-08-01 14:45   ` Andrey Ryabinin
@ 2016-08-02 11:39     ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-02 11:39 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> Currently we just dump the stack in case of a double-free bug.
> Let's dump all the info we have about the object.
>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> ---
>  mm/kasan/kasan.c  |  3 +--
>  mm/kasan/kasan.h  |  2 ++
>  mm/kasan/report.c | 54 ++++++++++++++++++++++++++++++++++++++----------------
>  3 files changed, 41 insertions(+), 18 deletions(-)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 92750e3..88af13c 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -543,8 +543,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>
>         shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
>         if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
> -               pr_err("Double free");
> -               dump_stack();
> +               kasan_report_double_free(cache, object, shadow_byte);
>                 return true;
>         }
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 9b7b31e..e5c2181 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -99,6 +99,8 @@ static inline bool kasan_report_enabled(void)
>
>  void kasan_report(unsigned long addr, size_t size,
>                 bool is_write, unsigned long ip);
> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
> +                       s8 shadow);
>
>  #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
>  void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index f437398..ee2bdb4 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -116,6 +116,26 @@ static inline bool init_task_stack_addr(const void *addr)
>                         sizeof(init_thread_union.stack));
>  }
>
> +static DEFINE_SPINLOCK(report_lock);
> +
> +static void kasan_start_report(unsigned long *flags)
> +{
> +       /*
> +        * Make sure we don't end up in loop.
> +        */
> +       kasan_disable_current();
> +       spin_lock_irqsave(&report_lock, *flags);
> +       pr_err("==================================================================\n");
> +}
> +
> +static void kasan_end_report(unsigned long *flags)
> +{
> +       pr_err("==================================================================\n");
> +       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
Don't we want to add the taint as early as possible once we've
detected the error?
> +       spin_unlock_irqrestore(&report_lock, *flags);
> +       kasan_enable_current();
> +}
> +
>  static void print_track(struct kasan_track *track)
>  {
>         pr_err("PID = %u\n", track->pid);
> @@ -129,8 +149,7 @@ static void print_track(struct kasan_track *track)
>         }
>  }
>
> -static void kasan_object_err(struct kmem_cache *cache, struct page *page,
> -                               void *object, char *unused_reason)
> +static void kasan_object_err(struct kmem_cache *cache, void *object)
>  {
>         struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
>
> @@ -147,6 +166,18 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
>         print_track(&alloc_info->free_track);
>  }
>
> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
> +                       s8 shadow)
> +{
> +       unsigned long flags;
> +
> +       kasan_start_report(&flags);
> +       pr_err("BUG: Double free or corrupt pointer\n");
How about "Double free or freeing an invalid pointer\n"?
I think "corrupt pointer" doesn't exactly reflect where the bug is.
> +       pr_err("Unexpected shadow byte: 0x%hhX\n", shadow);
> +       kasan_object_err(cache, object);
> +       kasan_end_report(&flags);
> +}
> +
>  static void print_address_description(struct kasan_access_info *info)
>  {
>         const void *addr = info->access_addr;
> @@ -160,8 +191,7 @@ static void print_address_description(struct kasan_access_info *info)
>                         struct kmem_cache *cache = page->slab_cache;
>                         object = nearest_obj(cache, page,
>                                                 (void *)info->access_addr);
> -                       kasan_object_err(cache, page, object,
> -                                       "kasan: bad access detected");
> +                       kasan_object_err(cache, object);
>                         return;
>                 }
>                 dump_page(page, "kasan: bad access detected");
> @@ -226,19 +256,13 @@ static void print_shadow_for_address(const void *addr)
>         }
>  }
>
> -static DEFINE_SPINLOCK(report_lock);
> -
>  static void kasan_report_error(struct kasan_access_info *info)
>  {
>         unsigned long flags;
>         const char *bug_type;
>
> -       /*
> -        * Make sure we don't end up in loop.
> -        */
> -       kasan_disable_current();
> -       spin_lock_irqsave(&report_lock, flags);
> -       pr_err("==================================================================\n");
> +       kasan_start_report(&flags);
> +
>         if (info->access_addr <
>                         kasan_shadow_to_mem((void *)KASAN_SHADOW_START)) {
>                 if ((unsigned long)info->access_addr < PAGE_SIZE)
> @@ -259,10 +283,8 @@ static void kasan_report_error(struct kasan_access_info *info)
>                 print_address_description(info);
>                 print_shadow_for_address(info->first_bad_addr);
>         }
> -       pr_err("==================================================================\n");
> -       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> -       spin_unlock_irqrestore(&report_lock, flags);
> -       kasan_enable_current();
> +
> +       kasan_end_report(&flags);
>  }
>
>  void kasan_report(unsigned long addr, size_t size,
> --
> 2.7.3
>






* Re: [PATCH 2/6] mm/kasan: don't reduce quarantine in atomic contexts
  2016-08-01 14:45   ` Andrey Ryabinin
@ 2016-08-02 11:42     ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-02 11:42 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> Currently we call quarantine_reduce() for any ___GFP_KSWAPD_RECLAIM
> (implied by __GFP_RECLAIM) allocation, so basically we call it on
> almost every allocation. quarantine_reduce() is sometimes a heavy
> operation, and calling it with interrupts disabled may trigger a hard LOCKUP:
>
>  NMI watchdog: Watchdog detected hard LOCKUP on cpu 2
>  irq event stamp: 1411258
>  Call Trace:
>   <NMI>  [<ffffffff98a48532>] dump_stack+0x68/0x96
>   [<ffffffff98357fbb>] watchdog_overflow_callback+0x15b/0x190
>   [<ffffffff9842f7d1>] __perf_event_overflow+0x1b1/0x540
>   [<ffffffff98455b14>] perf_event_overflow+0x14/0x20
>   [<ffffffff9801976a>] intel_pmu_handle_irq+0x36a/0xad0
>   [<ffffffff9800ba4c>] perf_event_nmi_handler+0x2c/0x50
>   [<ffffffff98057058>] nmi_handle+0x128/0x480
>   [<ffffffff980576d2>] default_do_nmi+0xb2/0x210
>   [<ffffffff980579da>] do_nmi+0x1aa/0x220
>   [<ffffffff99a0bb07>] end_repeat_nmi+0x1a/0x1e
>   <<EOE>>  [<ffffffff981871e6>] __kernel_text_address+0x86/0xb0
>   [<ffffffff98055c4b>] print_context_stack+0x7b/0x100
>   [<ffffffff98054e9b>] dump_trace+0x12b/0x350
>   [<ffffffff98076ceb>] save_stack_trace+0x2b/0x50
>   [<ffffffff98573003>] set_track+0x83/0x140
>   [<ffffffff98575f4a>] free_debug_processing+0x1aa/0x420
>   [<ffffffff98578506>] __slab_free+0x1d6/0x2e0
>   [<ffffffff9857a9b6>] ___cache_free+0xb6/0xd0
>   [<ffffffff9857db53>] qlist_free_all+0x83/0x100
>   [<ffffffff9857df07>] quarantine_reduce+0x177/0x1b0
>   [<ffffffff9857c423>] kasan_kmalloc+0xf3/0x100
>
> Call quarantine_reduce() only if direct reclaim is allowed.
>
> Fixes: 55834c59098d ("mm: kasan: initial memory quarantine implementation")
> Reported-by: Dave Jones <davej@codemonkey.org.uk>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  mm/kasan/kasan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 3019cec..c99ef40 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -565,7 +565,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>         unsigned long redzone_start;
>         unsigned long redzone_end;
>
> -       if (flags & __GFP_RECLAIM)
> +       if (gfpflags_allow_blocking(flags))
>                 quarantine_reduce();
>
>         if (unlikely(object == NULL))
> @@ -596,7 +596,7 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>         unsigned long redzone_start;
>         unsigned long redzone_end;
>
> -       if (flags & __GFP_RECLAIM)
> +       if (gfpflags_allow_blocking(flags))
>                 quarantine_reduce();
>
>         if (unlikely(ptr == NULL))
> --
> 2.7.3
>






* Re: [PATCH 6/6] kasan: improve double-free reports.
  2016-08-02 11:39     ` Alexander Potapenko
@ 2016-08-02 12:05       ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-02 12:05 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Tue, Aug 2, 2016 at 1:39 PM, Alexander Potapenko <glider@google.com> wrote:
> On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>> Currently we just dump stack in case of double free bug.
>> Let's dump all info about the object that we have.
>>
>> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> ---
>>  mm/kasan/kasan.c  |  3 +--
>>  mm/kasan/kasan.h  |  2 ++
>>  mm/kasan/report.c | 54 ++++++++++++++++++++++++++++++++++++++----------------
>>  3 files changed, 41 insertions(+), 18 deletions(-)
>>
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index 92750e3..88af13c 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -543,8 +543,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>
>>         shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
>>         if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
>> -               pr_err("Double free");
>> -               dump_stack();
>> +               kasan_report_double_free(cache, object, shadow_byte);
>>                 return true;
>>         }
>>
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index 9b7b31e..e5c2181 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -99,6 +99,8 @@ static inline bool kasan_report_enabled(void)
>>
>>  void kasan_report(unsigned long addr, size_t size,
>>                 bool is_write, unsigned long ip);
>> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
>> +                       s8 shadow);
>>
>>  #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
>>  void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index f437398..ee2bdb4 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -116,6 +116,26 @@ static inline bool init_task_stack_addr(const void *addr)
>>                         sizeof(init_thread_union.stack));
>>  }
>>
>> +static DEFINE_SPINLOCK(report_lock);
>> +
>> +static void kasan_start_report(unsigned long *flags)
>> +{
>> +       /*
>> +        * Make sure we don't end up in loop.
>> +        */
>> +       kasan_disable_current();
>> +       spin_lock_irqsave(&report_lock, *flags);
>> +       pr_err("==================================================================\n");
>> +}
>> +
>> +static void kasan_end_report(unsigned long *flags)
>> +{
>> +       pr_err("==================================================================\n");
>> +       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> Don't we want to add the taint as early as possible once we've
> detected the error?
>> +       spin_unlock_irqrestore(&report_lock, *flags);
>> +       kasan_enable_current();
>> +}
>> +
>>  static void print_track(struct kasan_track *track)
>>  {
>>         pr_err("PID = %u\n", track->pid);
>> @@ -129,8 +149,7 @@ static void print_track(struct kasan_track *track)
>>         }
>>  }
>>
>> -static void kasan_object_err(struct kmem_cache *cache, struct page *page,
>> -                               void *object, char *unused_reason)
>> +static void kasan_object_err(struct kmem_cache *cache, void *object)
>>  {
>>         struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
>>
>> @@ -147,6 +166,18 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
>>         print_track(&alloc_info->free_track);
>>  }
>>
>> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
>> +                       s8 shadow)
>> +{
>> +       unsigned long flags;
>> +
>> +       kasan_start_report(&flags);
>> +       pr_err("BUG: Double free or corrupt pointer\n");
> How about "Double free or freeing an invalid pointer\n"?
> I think "corrupt pointer" doesn't exactly reflect where the bug is.
By the way, I think we could be doing a better job by always detecting
an invalid pointer being passed to kasan_slab_free().
>> +       pr_err("Unexpected shadow byte: 0x%hhX\n", shadow);
>> +       kasan_object_err(cache, object);
>> +       kasan_end_report(&flags);
>> +}
>> +
>>  static void print_address_description(struct kasan_access_info *info)
>>  {
>>         const void *addr = info->access_addr;
>> @@ -160,8 +191,7 @@ static void print_address_description(struct kasan_access_info *info)
>>                         struct kmem_cache *cache = page->slab_cache;
>>                         object = nearest_obj(cache, page,
>>                                                 (void *)info->access_addr);
>> -                       kasan_object_err(cache, page, object,
>> -                                       "kasan: bad access detected");
>> +                       kasan_object_err(cache, object);
>>                         return;
>>                 }
>>                 dump_page(page, "kasan: bad access detected");
>> @@ -226,19 +256,13 @@ static void print_shadow_for_address(const void *addr)
>>         }
>>  }
>>
>> -static DEFINE_SPINLOCK(report_lock);
>> -
>>  static void kasan_report_error(struct kasan_access_info *info)
>>  {
>>         unsigned long flags;
>>         const char *bug_type;
>>
>> -       /*
>> -        * Make sure we don't end up in loop.
>> -        */
>> -       kasan_disable_current();
>> -       spin_lock_irqsave(&report_lock, flags);
>> -       pr_err("==================================================================\n");
>> +       kasan_start_report(&flags);
>> +
>>         if (info->access_addr <
>>                         kasan_shadow_to_mem((void *)KASAN_SHADOW_START)) {
>>                 if ((unsigned long)info->access_addr < PAGE_SIZE)
>> @@ -259,10 +283,8 @@ static void kasan_report_error(struct kasan_access_info *info)
>>                 print_address_description(info);
>>                 print_shadow_for_address(info->first_bad_addr);
>>         }
>> -       pr_err("==================================================================\n");
>> -       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
>> -       spin_unlock_irqrestore(&report_lock, flags);
>> -       kasan_enable_current();
>> +
>> +       kasan_end_report(&flags);
>>  }
>>
>>  void kasan_report(unsigned long addr, size_t size,
>> --
>> 2.7.3
>>
>
>
>






* Re: [PATCH 6/6] kasan: improve double-free reports.
  2016-08-02 11:39     ` Alexander Potapenko
@ 2016-08-02 12:34       ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-02 12:34 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List



On 08/02/2016 02:39 PM, Alexander Potapenko wrote:

>> +static void kasan_end_report(unsigned long *flags)
>> +{
>> +       pr_err("==================================================================\n");
>> +       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> Don't we want to add the taint as early as possible once we've
> detected the error?

What for?
It certainly shouldn't be before dump_stack(); otherwise, already on the first report the kernel would be claimed as tainted.
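
[Editor's note: the ordering being discussed can be modeled in user space. The sketch below uses a plain `tainted` flag and printf() as hypothetical stand-ins for add_taint() and pr_err(); it only demonstrates why the taint is set at the end of the report bracket, not the real KASAN implementation.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* User-space model of the report bracketing added by this series.
 * "tainted" and printf() are stand-ins for add_taint() and pr_err(). */
static bool tainted = false;

static void kasan_start_report(void)
{
	printf("==================================================================\n");
	/* The whole report body (dump_stack() etc.) runs before the
	 * taint flag is set, so the first report prints "Not tainted". */
}

static void kasan_end_report(void)
{
	printf("==================================================================\n");
	tainted = true;	/* models add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE) */
}
```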


>>
>> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
>> +                       s8 shadow)
>> +{
>> +       unsigned long flags;
>> +
>> +       kasan_start_report(&flags);
>> +       pr_err("BUG: Double free or corrupt pointer\n");
> How about "Double free or freeing an invalid pointer\n"?
> I think "corrupt pointer" doesn't exactly reflect where the bug is.

Ok

^ permalink raw reply	[flat|nested] 30+ messages in thread


* Re: [PATCH 5/6] mm/kasan: get rid of ->state in struct kasan_alloc_meta
  2016-08-01 14:45   ` Andrey Ryabinin
@ 2016-08-02 12:37     ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-02 12:37 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List,
	kernel test robot, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim

On Mon, Aug 1, 2016 at 4:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> The state of an object is currently tracked in two places - the shadow
> memory, and the ->state field in struct kasan_alloc_meta.
> We can get rid of the latter; this will save us a little bit of memory.
Not sure I like this.
Wiping an object's shadow is non-atomic. You cannot rely on the fact
that every byte of the shadow is consistent with the metadata values.
We could possibly use the first byte of the object's shadow to protect
the metadata, but that's no better than keeping the state.
We also need to fix ordering problems, as neither the accesses to the
allocation state in the existing code nor those to the shadow in your
patch are properly ordered with respect to the metadata accesses.
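
[Editor's note: the ordering concern can be sketched with C11 atomics. The names and the poison value below are illustrative stand-ins, not the real kernel code; the point is that nothing orders the metadata store against publication of the shadow byte.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model: the free path updates the metadata with a plain
 * store and then publishes the poisoned shadow byte with a relaxed
 * store, so a reporter on another CPU may observe the poisoned shadow
 * yet still read stale metadata. */
struct obj_model {
	int free_pid;			/* stand-in for the free kasan_track */
	_Atomic signed char shadow;	/* stand-in for the shadow byte      */
};

static void model_free(struct obj_model *m, int pid)
{
	m->free_pid = pid;		/* plain store: unordered ...        */
	atomic_store_explicit(&m->shadow, (signed char)0xFB,
			      memory_order_relaxed);	/* ... w.r.t. this */
}

static int model_report(struct obj_model *m)
{
	if (atomic_load_explicit(&m->shadow, memory_order_relaxed) < 0)
		return m->free_pid;	/* may be stale under a real race  */
	return -1;			/* shadow says the object is live  */
}
```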

> Also, this allows us to move the free stack into struct kasan_alloc_meta
> without increasing memory consumption. So now we always know
> when the object was last freed. This may be useful for debugging
> long-delayed use-after-free bugs.
>
> As a side effect this fixes following UBSAN warning:
>         UBSAN: Undefined behaviour in mm/kasan/quarantine.c:102:13
>         member access within misaligned address ffff88000d1efebc for type 'struct qlist_node'
>         which requires 8 byte alignment
>
> Reported-by: kernel test robot <xiaolong.ye@intel.com>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  include/linux/kasan.h |  3 +++
>  mm/kasan/kasan.c      | 61 +++++++++++++++++++++++----------------------------
>  mm/kasan/kasan.h      | 12 ++--------
>  mm/kasan/quarantine.c |  2 --
>  mm/kasan/report.c     | 23 +++++--------------
>  mm/slab.c             |  4 +++-
>  mm/slub.c             |  1 +
>  7 files changed, 42 insertions(+), 64 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index c9cf374..d600303 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -56,6 +56,7 @@ void kasan_cache_destroy(struct kmem_cache *cache);
>  void kasan_poison_slab(struct page *page);
>  void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
>  void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> +void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
>
>  void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
>  void kasan_kfree_large(const void *ptr);
> @@ -102,6 +103,8 @@ static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
>                                         void *object) {}
>  static inline void kasan_poison_object_data(struct kmem_cache *cache,
>                                         void *object) {}
> +static inline void kasan_init_slab_obj(struct kmem_cache *cache,
> +                               const void *object) {}
>
>  static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
>  static inline void kasan_kfree_large(const void *ptr) {}
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 388e812..92750e3 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -442,11 +442,6 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>         kasan_poison_shadow(object,
>                         round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
>                         KASAN_KMALLOC_REDZONE);
> -       if (cache->flags & SLAB_KASAN) {
> -               struct kasan_alloc_meta *alloc_info =
> -                       get_alloc_info(cache, object);
> -               alloc_info->state = KASAN_STATE_INIT;
> -       }
>  }
>
>  static inline int in_irqentry_text(unsigned long ptr)
> @@ -510,6 +505,17 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>         return (void *)object + cache->kasan_info.free_meta_offset;
>  }
>
> +void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
> +{
> +       struct kasan_alloc_meta *alloc_info;
> +
> +       if (!(cache->flags & SLAB_KASAN))
> +               return;
> +
> +       alloc_info = get_alloc_info(cache, object);
> +       __memset(alloc_info, 0, sizeof(*alloc_info));
> +}
> +
>  void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
>         kasan_kmalloc(cache, object, cache->object_size, flags);
> @@ -529,34 +535,27 @@ static void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
>
>  bool kasan_slab_free(struct kmem_cache *cache, void *object)
>  {
> +       s8 shadow_byte;
> +
>         /* RCU slabs could be legally used after free within the RCU period */
>         if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>                 return false;
>
> -       if (likely(cache->flags & SLAB_KASAN)) {
> -               struct kasan_alloc_meta *alloc_info;
> -               struct kasan_free_meta *free_info;
> +       shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
> +       if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
> +               pr_err("Double free");
> +               dump_stack();
> +               return true;
> +       }
>
> -               alloc_info = get_alloc_info(cache, object);
> -               free_info = get_free_info(cache, object);
> +       kasan_poison_slab_free(cache, object);
>
> -               switch (alloc_info->state) {
> -               case KASAN_STATE_ALLOC:
> -                       alloc_info->state = KASAN_STATE_QUARANTINE;
> -                       set_track(&free_info->track, GFP_NOWAIT);
> -                       kasan_poison_slab_free(cache, object);
> -                       quarantine_put(free_info, cache);
> -                       return true;
> -               case KASAN_STATE_QUARANTINE:
> -               case KASAN_STATE_FREE:
> -                       pr_err("Double free");
> -                       dump_stack();
> -                       break;
> -               default:
> -                       break;
> -               }
> -       }
> -       return false;
> +       if (unlikely(!(cache->flags & SLAB_KASAN)))
> +               return false;
> +
> +       set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
> +       quarantine_put(get_free_info(cache, object), cache);
> +       return true;
>  }
>
>  void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
> @@ -579,13 +578,9 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>         kasan_unpoison_shadow(object, size);
>         kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>                 KASAN_KMALLOC_REDZONE);
> -       if (cache->flags & SLAB_KASAN) {
> -               struct kasan_alloc_meta *alloc_info =
> -                       get_alloc_info(cache, object);
>
> -               alloc_info->state = KASAN_STATE_ALLOC;
> -               set_track(&alloc_info->track, flags);
> -       }
> +       if (cache->flags & SLAB_KASAN)
> +               set_track(&get_alloc_info(cache, object)->alloc_track, flags);
>  }
>  EXPORT_SYMBOL(kasan_kmalloc);
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index aa17546..9b7b31e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -59,13 +59,6 @@ struct kasan_global {
>   * Structures to keep alloc and free tracks *
>   */
>
> -enum kasan_state {
> -       KASAN_STATE_INIT,
> -       KASAN_STATE_ALLOC,
> -       KASAN_STATE_QUARANTINE,
> -       KASAN_STATE_FREE
> -};
> -
>  #define KASAN_STACK_DEPTH 64
>
>  struct kasan_track {
> @@ -74,8 +67,8 @@ struct kasan_track {
>  };
>
>  struct kasan_alloc_meta {
> -       struct kasan_track track;
> -       u32 state;
> +       struct kasan_track alloc_track;
> +       struct kasan_track free_track;
>  };
>
>  struct qlist_node {
> @@ -86,7 +79,6 @@ struct kasan_free_meta {
>          * Otherwise it might be used for the allocator freelist.
>          */
>         struct qlist_node quarantine_link;
> -       struct kasan_track track;
>  };
>
>  struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 4852625..7fd121d 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -144,13 +144,11 @@ static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache)
>  static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
>  {
>         void *object = qlink_to_object(qlink, cache);
> -       struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
>         unsigned long flags;
>
>         if (IS_ENABLED(CONFIG_SLAB))
>                 local_irq_save(flags);
>
> -       alloc_info->state = KASAN_STATE_FREE;
>         ___cache_free(cache, object, _THIS_IP_);
>
>         if (IS_ENABLED(CONFIG_SLAB))
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index d67a7e0..f437398 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -133,7 +133,6 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
>                                 void *object, char *unused_reason)
>  {
>         struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
> -       struct kasan_free_meta *free_info;
>
>         dump_stack();
>         pr_err("Object at %p, in cache %s size: %d\n", object, cache->name,
> @@ -141,23 +140,11 @@ static void kasan_object_err(struct kmem_cache *cache, struct page *page,
>
>         if (!(cache->flags & SLAB_KASAN))
>                 return;
> -       switch (alloc_info->state) {
> -       case KASAN_STATE_INIT:
> -               pr_err("Object not allocated yet\n");
> -               break;
> -       case KASAN_STATE_ALLOC:
> -               pr_err("Allocation:\n");
> -               print_track(&alloc_info->track);
> -               break;
> -       case KASAN_STATE_FREE:
> -       case KASAN_STATE_QUARANTINE:
> -               free_info = get_free_info(cache, object);
> -               pr_err("Allocation:\n");
> -               print_track(&alloc_info->track);
> -               pr_err("Deallocation:\n");
> -               print_track(&free_info->track);
> -               break;
> -       }
> +
> +       pr_err("Allocated:\n");
> +       print_track(&alloc_info->alloc_track);
> +       pr_err("Freed:\n");
> +       print_track(&alloc_info->free_track);
>  }
>
>  static void print_address_description(struct kasan_access_info *info)
> diff --git a/mm/slab.c b/mm/slab.c
> index 09771ed..ca135bd 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2604,9 +2604,11 @@ static void cache_init_objs(struct kmem_cache *cachep,
>         }
>
>         for (i = 0; i < cachep->num; i++) {
> +               objp = index_to_obj(cachep, page, i);
> +               kasan_init_slab_obj(cachep, objp);
> +
>                 /* constructor could break poison info */
>                 if (DEBUG == 0 && cachep->ctor) {
> -                       objp = index_to_obj(cachep, page, i);
>                         kasan_unpoison_object_data(cachep, objp);
>                         cachep->ctor(objp);
>                         kasan_poison_object_data(cachep, objp);
> diff --git a/mm/slub.c b/mm/slub.c
> index 74e7c8c..26eb6a99 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1384,6 +1384,7 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>                                 void *object)
>  {
>         setup_object_debug(s, page, object);
> +       kasan_init_slab_obj(s, object);
>         if (unlikely(s->ctor)) {
>                 kasan_unpoison_object_data(s, object);
>                 s->ctor(object);
> --
> 2.7.3
>
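
[Editor's note: the shadow-byte test that replaces the ->state check in kasan_slab_free() above can be exercised in isolation. This is a user-space sketch; the scale constant and the 0xFB poison value are assumed to match mm/kasan of that era.]

```c
#include <assert.h>
#include <stdbool.h>

#define KASAN_SHADOW_SCALE_SIZE 8	/* one shadow byte covers 8 bytes */

/* Shadow value 0 means the whole 8-byte granule is accessible,
 * 1..7 mean only the first N bytes are, and negative values are
 * poison markers (e.g. 0xFB for freed kmalloc memory); so anything
 * outside [0, KASAN_SHADOW_SCALE_SIZE) indicates the object is not
 * live, which is what the double-free check keys on. */
static bool shadow_indicates_freed(signed char shadow_byte)
{
	return shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
}
```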



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 30+ messages in thread

> @@ -1384,6 +1384,7 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>                                 void *object)
>  {
>         setup_object_debug(s, page, object);
> +       kasan_init_slab_obj(s, object);
>         if (unlikely(s->ctor)) {
>                 kasan_unpoison_object_data(s, object);
>                 s->ctor(object);
> --
> 2.7.3
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 6/6] kasan: improve double-free reports.
  2016-08-02 12:34       ` Andrey Ryabinin
@ 2016-08-02 12:53         ` Alexander Potapenko
  -1 siblings, 0 replies; 30+ messages in thread
From: Alexander Potapenko @ 2016-08-02 12:53 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Dave Jones, Vegard Nossum, Sasha Levin,
	Dmitry Vyukov, kasan-dev, LKML, Linux Memory Management List

On Tue, Aug 2, 2016 at 2:34 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
> On 08/02/2016 02:39 PM, Alexander Potapenko wrote:
>
>>> +static void kasan_end_report(unsigned long *flags)
>>> +{
>>> +       pr_err("==================================================================\n");
>>> +       add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
>> Don't we want to add the taint as early as possible once we've
>> detected the error?
>
> What for?
> It certainly shouldn't be before dump_stack(), otherwise on the first report the kernel will be claimed as tainted.
Ah, got it. Fair enough.
>
>>>
>>> +void kasan_report_double_free(struct kmem_cache *cache, void *object,
>>> +                       s8 shadow)
>>> +{
>>> +       unsigned long flags;
>>> +
>>> +       kasan_start_report(&flags);
>>> +       pr_err("BUG: Double free or corrupt pointer\n");
>> How about "Double free or freeing an invalid pointer\n"?
>> I think "corrupt pointer" doesn't exactly reflect where the bug is.
>
> Ok
>




^ permalink raw reply	[flat|nested] 30+ messages in thread


* [PATCH] kasan-improve-double-free-reports-fix
  2016-08-02 11:39     ` Alexander Potapenko
@ 2016-08-02 16:00       ` Andrey Ryabinin
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrey Ryabinin @ 2016-08-02 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: kasan-dev, linux-mm, linux-kernel, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov

Change the double-free report message as suggested by Alexander.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
---
 mm/kasan/report.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ee2bdb4..24c1211 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -172,7 +172,7 @@ void kasan_report_double_free(struct kmem_cache *cache, void *object,
 	unsigned long flags;
 
 	kasan_start_report(&flags);
-	pr_err("BUG: Double free or corrupt pointer\n");
+	pr_err("BUG: Double free or freeing an invalid pointer\n");
 	pr_err("Unexpected shadow byte: 0x%hhX\n", shadow);
 	kasan_object_err(cache, object);
 	kasan_end_report(&flags);
-- 
2.7.3

^ permalink raw reply related	[flat|nested] 30+ messages in thread


end of thread, other threads:[~2016-08-02 18:45 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-01 14:45 [PATCH 1/6] mm/kasan: fix corruptions and false positive reports Andrey Ryabinin
2016-08-01 14:45 ` Andrey Ryabinin
2016-08-01 14:45 ` [PATCH 2/6] mm/kasan: don't reduce quarantine in atomic contexts Andrey Ryabinin
2016-08-01 14:45   ` Andrey Ryabinin
2016-08-02 11:42   ` Alexander Potapenko
2016-08-02 11:42     ` Alexander Potapenko
2016-08-01 14:45 ` [PATCH 3/6] mm/kasan, slub: don't disable interrupts when object leaves quarantine Andrey Ryabinin
2016-08-01 14:45   ` Andrey Ryabinin
2016-08-01 14:47   ` Alexander Potapenko
2016-08-01 14:47     ` Alexander Potapenko
2016-08-01 14:45 ` [PATCH 4/6] mm/kasan: get rid of ->alloc_size in struct kasan_alloc_meta Andrey Ryabinin
2016-08-01 14:45   ` Andrey Ryabinin
2016-08-01 14:45 ` [PATCH 5/6] mm/kasan: get rid of ->state " Andrey Ryabinin
2016-08-01 14:45   ` Andrey Ryabinin
2016-08-02 12:37   ` Alexander Potapenko
2016-08-02 12:37     ` Alexander Potapenko
2016-08-01 14:45 ` [PATCH 6/6] kasan: improve double-free reports Andrey Ryabinin
2016-08-01 14:45   ` Andrey Ryabinin
2016-08-02 11:39   ` Alexander Potapenko
2016-08-02 11:39     ` Alexander Potapenko
2016-08-02 12:05     ` Alexander Potapenko
2016-08-02 12:05       ` Alexander Potapenko
2016-08-02 12:34     ` Andrey Ryabinin
2016-08-02 12:34       ` Andrey Ryabinin
2016-08-02 12:53       ` Alexander Potapenko
2016-08-02 12:53         ` Alexander Potapenko
2016-08-02 16:00     ` [PATCH] kasan-improve-double-free-reports-fix Andrey Ryabinin
2016-08-02 16:00       ` Andrey Ryabinin
2016-08-01 14:45 ` [PATCH 1/6] mm/kasan: fix corruptions and false positive reports Alexander Potapenko
2016-08-01 14:45   ` Alexander Potapenko
