linux-kernel.vger.kernel.org archive mirror
* [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
@ 2019-08-19 11:44 Walter Wu
  2019-08-19 12:56 ` Will Deacon
  0 siblings, 1 reply; 13+ messages in thread
From: Walter Wu @ 2019-08-19 11:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	Andrey Konovalov
  Cc: kasan-dev, linux-arm-kernel, linux-kernel, linux-mediatek,
	wsd_upstream, Walter Wu

__arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
translate a physical address; the conversion resets the pointer tag to
0xff, producing a false positive.

When tag-based KASAN is enabled, phys_to_virt() needs to restore the
pointer's original tag so that KASAN does not report a spurious memory
corruption.

BUG: KASAN: double-free or invalid-free in __arm_v7s_unmap+0x720/0xda4
Pointer tag: [ff], memory tag: [c1]

Call trace:
 dump_backtrace+0x0/0x1d4
 show_stack+0x14/0x1c
 dump_stack+0xe8/0x140
 print_address_description+0x80/0x2f0
 kasan_report_invalid_free+0x58/0x74
 __kasan_slab_free+0x1e4/0x220
 kasan_slab_free+0xc/0x18
 kmem_cache_free+0xfc/0x884
 __arm_v7s_unmap+0x720/0xda4
 __arm_v7s_map+0xc8/0x774
 arm_v7s_map+0x80/0x158
 mtk_iommu_map+0xb4/0xe0
 iommu_map+0x154/0x450
 iommu_map_sg+0xe4/0x150
 iommu_dma_map_sg+0x214/0x4ec
 __iommu_map_sg_attrs+0xf0/0x110
 ion_map_dma_buf+0xe8/0x114
 dma_buf_map_attachment+0x4c/0x80
 disp_sync_prepare_buf+0x378/0x820
 _ioctl_prepare_buffer+0x130/0x870
 mtk_disp_mgr_ioctl+0x5c4/0xab0
 do_vfs_ioctl+0x8e0/0x15a4
 __arm64_sys_ioctl+0x8c/0xb4
 el0_svc_common+0xe4/0x1e0
 el0_svc_handler+0x30/0x3c
 el0_svc+0x8/0xc

Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
---
 arch/arm64/include/asm/kasan.h  |  1 -
 arch/arm64/include/asm/memory.h | 10 ++++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..59894cafad60 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -5,7 +5,6 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
-#include <asm/memory.h>
 #include <asm/pgtable-types.h>
 
 #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8ffcf5a512bb..75af5ba9ff22 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -171,6 +171,7 @@
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
+#include <asm/kasan.h>
 
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
@@ -282,7 +283,16 @@ static inline phys_addr_t virt_to_phys(const volatile void *x)
 #define phys_to_virt phys_to_virt
 static inline void *phys_to_virt(phys_addr_t x)
 {
+#ifdef CONFIG_KASAN_SW_TAGS
+	unsigned long addr = __phys_to_virt(x);
+	u8 *tag = (void *)(addr >> KASAN_SHADOW_SCALE_SHIFT)
+				+ KASAN_SHADOW_OFFSET;
+
+	addr = __tag_set(addr, *tag);
+	return (void *)addr;
+#else
 	return (void *)(__phys_to_virt(x));
+#endif
 }
 
 /*
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 11:44 [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan Walter Wu
@ 2019-08-19 12:56 ` Will Deacon
  2019-08-19 13:23   ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Will Deacon @ 2019-08-19 12:56 UTC (permalink / raw)
  To: Walter Wu
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	Andrey Konovalov, wsd_upstream, linux-kernel, kasan-dev,
	linux-mediatek, linux-arm-kernel

On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> translate a physical address; the conversion resets the pointer tag to
> 0xff, producing a false positive.
>
> When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> pointer's original tag so that KASAN does not report a spurious memory
> corruption.

Hmm. Which tree did you see this on? We've recently queued a load of fixes
in this area, but I /thought/ they were only needed after the support for
52-bit virtual addressing in the kernel.

Will


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 12:56 ` Will Deacon
@ 2019-08-19 13:23   ` Mark Rutland
  2019-08-19 13:34     ` Will Deacon
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2019-08-19 13:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: Walter Wu, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	Andrey Konovalov, wsd_upstream, linux-kernel, kasan-dev,
	linux-mediatek, linux-arm-kernel

On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > translate a physical address; the conversion resets the pointer tag to
> > 0xff, producing a false positive.
> >
> > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > pointer's original tag so that KASAN does not report a spurious memory
> > corruption.
> 
> Hmm. Which tree did you see this on? We've recently queued a load of fixes
> in this area, but I /thought/ they were only needed after the support for
> 52-bit virtual addressing in the kernel.

I'm seeing similar issues in the virtio blk code (splat below), atop of
the arm64 for-next/core branch. I think this is a latent issue, and
people are only just starting to test with KASAN_SW_TAGS.

It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
virt->page->virt, losing the per-object tag in the process.

Our page_to_virt() seems to get a per-page tag, but this only makes
sense if you're dealing with the page allocator, rather than something
like SLUB which carves a page into smaller objects giving each object a
distinct tag.

Any round-trip of a pointer from SLUB is going to lose the per-object
tag.

Thanks,
Mark.

==================================================================
BUG: KASAN: double-free or invalid-free in virtblk_request_done+0x128/0x1d8 drivers/block/virtio_blk.c:215
Pointer tag: [ff], memory tag: [a8]

CPU: 0 PID: 19116 Comm: syz-executor.0 Not tainted 5.3.0-rc3-00075-gcb38552 #1
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x3c0 arch/arm64/include/asm/stacktrace.h:166
 show_stack+0x24/0x30 arch/arm64/kernel/traps.c:138
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x138/0x1f4 lib/dump_stack.c:113
 print_address_description+0x7c/0x328 mm/kasan/report.c:351
 kasan_report_invalid_free+0x80/0xe0 mm/kasan/report.c:444
 __kasan_slab_free+0x1a8/0x208 mm/kasan/common.c:56
 kasan_slab_free+0xc/0x18 mm/kasan/common.c:457
 slab_free_hook mm/slub.c:1423 [inline]
 slab_free_freelist_hook mm/slub.c:1474 [inline]
 slab_free mm/slub.c:3016 [inline]
 kfree+0x254/0x9dc mm/slub.c:3957
 virtblk_request_done+0x128/0x1d8 drivers/block/virtio_blk.c:215
 blk_done_softirq+0x3dc/0x49c block/blk-softirq.c:37
 __do_softirq+0xa90/0x1504 kernel/softirq.c:292
 do_softirq_own_stack include/linux/interrupt.h:549 [inline]
 invoke_softirq kernel/softirq.c:380 [inline]
 irq_exit+0x3b0/0x4f8 kernel/softirq.c:413
 __handle_domain_irq+0x150/0x250 kernel/irq/irqdesc.c:671
 atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
 static_key_count include/linux/jump_label.h:254 [inline]
 cpus_have_const_cap arch/arm64/include/asm/cpufeature.h:410 [inline]
 gic_read_iar drivers/irqchip/irq-gic-v3.c:152 [inline]
 gic_handle_irq+0x244/0x4ac drivers/irqchip/irq-gic-v3.c:490
 el1_irq+0xbc/0x140 arch/arm64/kernel/entry.S:670
 ktime_add_safe kernel/time/hrtimer.c:321 [inline]
 hrtimer_set_expires_range_ns include/linux/hrtimer.h:235 [inline]
 hrtimer_nanosleep kernel/time/hrtimer.c:1732 [inline]
 __do_sys_nanosleep kernel/time/hrtimer.c:1767 [inline]
 __se_sys_nanosleep kernel/time/hrtimer.c:1754 [inline]
 __arm64_sys_nanosleep+0x344/0x554 kernel/time/hrtimer.c:1754
 __invoke_syscall arch/arm64/kernel/syscall.c:36 [inline]
 invoke_syscall arch/arm64/kernel/syscall.c:48 [inline]
 el0_svc_common arch/arm64/kernel/syscall.c:114 [inline]
 el0_svc_handler+0x300/0x540 arch/arm64/kernel/syscall.c:160
 el0_svc+0x8/0xc arch/arm64/kernel/entry.S:1006

Allocated by task 170:
 save_stack mm/kasan/common.c:69 [inline]
 set_track mm/kasan/common.c:77 [inline]
 __kasan_kmalloc+0x114/0x1d0 mm/kasan/common.c:487
 kasan_kmalloc+0x10/0x18 mm/kasan/common.c:501
 __kmalloc+0x1f0/0x48c mm/slub.c:3811
 kmalloc_array include/linux/slab.h:676 [inline]
 virtblk_setup_discard_write_zeroes drivers/block/virtio_blk.c:188 [inline]
 virtio_queue_rq+0x948/0xe48 drivers/block/virtio_blk.c:322
 blk_mq_dispatch_rq_list+0x914/0x16fc block/blk-mq.c:1257
 blk_mq_do_dispatch_sched+0x374/0x4d8 block/blk-mq-sched.c:115
 blk_mq_sched_dispatch_requests+0x4d0/0x68c block/blk-mq-sched.c:216
 __blk_mq_run_hw_queue+0x22c/0x35c block/blk-mq.c:1387
 blk_mq_run_work_fn+0x64/0x78 block/blk-mq.c:1620
 process_one_work+0x10bc/0x1df0 kernel/workqueue.c:2269
 worker_thread+0x1124/0x17bc kernel/workqueue.c:2415
 kthread+0x3c0/0x3d0 kernel/kthread.c:255
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:1164

Freed by task 17121:
 save_stack mm/kasan/common.c:69 [inline]
 set_track mm/kasan/common.c:77 [inline]
 __kasan_slab_free+0x138/0x208 mm/kasan/common.c:449
 kasan_slab_free+0xc/0x18 mm/kasan/common.c:457
 slab_free_hook mm/slub.c:1423 [inline]
 slab_free_freelist_hook mm/slub.c:1474 [inline]
 slab_free mm/slub.c:3016 [inline]
 kfree+0x254/0x9dc mm/slub.c:3957
 kvfree+0x54/0x60 mm/util.c:488
 __vunmap+0xa3c/0xafc mm/vmalloc.c:2255
 __vfree mm/vmalloc.c:2299 [inline]
 vfree+0xe4/0x1c4 mm/vmalloc.c:2329
 copy_entries_to_user net/ipv6/netfilter/ip6_tables.c:883 [inline]
 get_entries net/ipv6/netfilter/ip6_tables.c:1041 [inline]
 do_ip6t_get_ctl+0xf78/0x1804 net/ipv6/netfilter/ip6_tables.c:1709
 nf_sockopt net/netfilter/nf_sockopt.c:104 [inline]
 nf_getsockopt+0x238/0x258 net/netfilter/nf_sockopt.c:122
 ipv6_getsockopt+0x3374/0x40c4 net/ipv6/ipv6_sockglue.c:1400
 tcp_getsockopt+0x214/0x54e0 net/ipv4/tcp.c:3662
 sock_common_getsockopt+0xc8/0xf4 net/core/sock.c:3089
 __sys_getsockopt net/socket.c:2129 [inline]
 __do_sys_getsockopt net/socket.c:2144 [inline]
 __se_sys_getsockopt net/socket.c:2141 [inline]
 __arm64_sys_getsockopt+0x240/0x308 net/socket.c:2141
 __invoke_syscall arch/arm64/kernel/syscall.c:36 [inline]
 invoke_syscall arch/arm64/kernel/syscall.c:48 [inline]
 el0_svc_common arch/arm64/kernel/syscall.c:114 [inline]
 el0_svc_handler+0x300/0x540 arch/arm64/kernel/syscall.c:160
 el0_svc+0x8/0xc arch/arm64/kernel/entry.S:1006

The buggy address belongs to the object at ffff00005338eb80
 which belongs to the cache kmalloc-128 of size 128
The buggy address is located 0 bytes inside of
 128-byte region [ffff00005338eb80, ffff00005338ec00)
The buggy address belongs to the page:
page:ffffffdffff4ce00 refcount:1 mapcount:0 mapping:e5ff0000576b0480 index:0x29ff000053388f00
flags: 0xffffff000000200(slab)
raw: 0ffffff000000200 ffffffdffff00108 5eff0000576afd40 e5ff0000576b0480
raw: 29ff000053388f00 000000000066005d 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff00005338e900: 34 34 34 34 34 34 34 34 fe fe fe fe fe fe fe fe
 ffff00005338ea00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
>ffff00005338eb00: fe fe fe fe fe fe fe fe a8 fe fe fe fe fe fe fe
                                           ^
 ffff00005338ec00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
 ffff00005338ed00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
==================================================================


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 13:23   ` Mark Rutland
@ 2019-08-19 13:34     ` Will Deacon
  2019-08-19 14:05       ` Andrey Konovalov
  2019-08-19 14:06       ` Andrey Ryabinin
  0 siblings, 2 replies; 13+ messages in thread
From: Will Deacon @ 2019-08-19 13:34 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Walter Wu, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	Andrey Konovalov, wsd_upstream, linux-kernel, kasan-dev,
	linux-mediatek, linux-arm-kernel

On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > translate a physical address; the conversion resets the pointer tag to
> > > 0xff, producing a false positive.
> > >
> > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > pointer's original tag so that KASAN does not report a spurious memory
> > > corruption.
> > 
> > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > in this area, but I /thought/ they were only needed after the support for
> > 52-bit virtual addressing in the kernel.
> 
> I'm seeing similar issues in the virtio blk code (splat below), atop of
> the arm64 for-next/core branch. I think this is a latent issue, and
> people are only just starting to test with KASAN_SW_TAGS.
> 
> It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> virt->page->virt, losing the per-object tag in the process.
> 
> Our page_to_virt() seems to get a per-page tag, but this only makes
> sense if you're dealing with the page allocator, rather than something
> like SLUB which carves a page into smaller objects giving each object a
> distinct tag.
> 
> Any round-trip of a pointer from SLUB is going to lose the per-object
> tag.

Urgh, I wonder how this is supposed to work?

If we end up having to check the KASAN shadow for *_to_virt(), then why
do we need to store anything in the page flags at all? Andrey?

Will


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 13:34     ` Will Deacon
@ 2019-08-19 14:05       ` Andrey Konovalov
  2019-08-19 14:22         ` Will Deacon
  2019-08-19 15:03         ` Mark Rutland
  2019-08-19 14:06       ` Andrey Ryabinin
  1 sibling, 2 replies; 13+ messages in thread
From: Andrey Konovalov @ 2019-08-19 14:05 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Walter Wu, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Will Deacon, Matthias Brugger,
	Andrew Morton, wsd_upstream, LKML, kasan-dev, linux-mediatek,
	Linux ARM

On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > translate a physical address; the conversion resets the pointer tag to
> > > > 0xff, producing a false positive.
> > > >
> > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > pointer's original tag so that KASAN does not report a spurious memory
> > > > corruption.
> > >
> > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > in this area, but I /thought/ they were only needed after the support for
> > > 52-bit virtual addressing in the kernel.
> >
> > I'm seeing similar issues in the virtio blk code (splat below), atop of
> > the arm64 for-next/core branch. I think this is a latent issue, and
> > people are only just starting to test with KASAN_SW_TAGS.
> >
> > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > virt->page->virt, losing the per-object tag in the process.
> >
> > Our page_to_virt() seems to get a per-page tag, but this only makes
> > sense if you're dealing with the page allocator, rather than something
> > like SLUB which carves a page into smaller objects giving each object a
> > distinct tag.
> >
> > Any round-trip of a pointer from SLUB is going to lose the per-object
> > tag.
>
> Urgh, I wonder how this is supposed to work?
>
> If we end up having to check the KASAN shadow for *_to_virt(), then why
> do we need to store anything in the page flags at all? Andrey?

As per commit 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated
via pagealloc"), we should only save a non-0xff tag in page flags for
non-slab pages.

Could you share your .config so I can reproduce this?


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 13:34     ` Will Deacon
  2019-08-19 14:05       ` Andrey Konovalov
@ 2019-08-19 14:06       ` Andrey Ryabinin
  2019-08-19 14:21         ` Walter Wu
  1 sibling, 1 reply; 13+ messages in thread
From: Andrey Ryabinin @ 2019-08-19 14:06 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: Walter Wu, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Matthias Brugger, Andrew Morton, Andrey Konovalov,
	wsd_upstream, linux-kernel, kasan-dev, linux-mediatek,
	linux-arm-kernel



On 8/19/19 4:34 PM, Will Deacon wrote:
> On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
>> On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
>>> On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
>>>> __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
>>>> translate a physical address; the conversion resets the pointer tag to
>>>> 0xff, producing a false positive.
>>>>
>>>> When tag-based KASAN is enabled, phys_to_virt() needs to restore the
>>>> pointer's original tag so that KASAN does not report a spurious memory
>>>> corruption.
>>>
>>> Hmm. Which tree did you see this on? We've recently queued a load of fixes
>>> in this area, but I /thought/ they were only needed after the support for
>>> 52-bit virtual addressing in the kernel.
>>
>> I'm seeing similar issues in the virtio blk code (splat below), atop of
>> the arm64 for-next/core branch. I think this is a latent issue, and
>> people are only just starting to test with KASAN_SW_TAGS.
>>
>> It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
>> virt->page->virt, losing the per-object tag in the process.
>>
>> Our page_to_virt() seems to get a per-page tag, but this only makes
>> sense if you're dealing with the page allocator, rather than something
>> like SLUB which carves a page into smaller objects giving each object a
>> distinct tag.
>>
>> Any round-trip of a pointer from SLUB is going to lose the per-object
>> tag.
> 
> Urgh, I wonder how this is supposed to work?
> 

We're supposed to ignore pointers with 0xff tags. We do ignore them when a
memory access is checked, but not in the kfree() path.
This untested patch should fix the issue:



---
 mm/kasan/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 895dc5e2b3d5..0a81cc328049 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -407,7 +407,7 @@ static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
 		return shadow_byte < 0 ||
 			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
 	else
-		return tag != (u8)shadow_byte;
+		return (tag != KASAN_TAG_KERNEL) && (tag != (u8)shadow_byte);
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
-- 
2.21.0



* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 14:06       ` Andrey Ryabinin
@ 2019-08-19 14:21         ` Walter Wu
  0 siblings, 0 replies; 13+ messages in thread
From: Walter Wu @ 2019-08-19 14:21 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Will Deacon, Mark Rutland, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	Andrey Konovalov, wsd_upstream, linux-kernel, kasan-dev,
	linux-mediatek, linux-arm-kernel

On Mon, 2019-08-19 at 17:06 +0300, Andrey Ryabinin wrote:
> 
> On 8/19/19 4:34 PM, Will Deacon wrote:
> > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> >> On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> >>> On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> >>>> __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> >>>> translate a physical address; the conversion resets the pointer tag to
> >>>> 0xff, producing a false positive.
> >>>>
> >>>> When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> >>>> pointer's original tag so that KASAN does not report a spurious memory
> >>>> corruption.
> >>>
> >>> Hmm. Which tree did you see this on? We've recently queued a load of fixes
> >>> in this area, but I /thought/ they were only needed after the support for
> >>> 52-bit virtual addressing in the kernel.
> >>
> >> I'm seeing similar issues in the virtio blk code (splat below), atop of
> >> the arm64 for-next/core branch. I think this is a latent issue, and
> >> people are only just starting to test with KASAN_SW_TAGS.
> >>
> >> It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> >> virt->page->virt, losing the per-object tag in the process.
> >>
> >> Our page_to_virt() seems to get a per-page tag, but this only makes
> >> sense if you're dealing with the page allocator, rather than something
> >> like SLUB which carves a page into smaller objects giving each object a
> >> distinct tag.
> >>
> >> Any round-trip of a pointer from SLUB is going to lose the per-object
> >> tag.
> > 
> > Urgh, I wonder how this is supposed to work?
> > 
> 
> We're supposed to ignore pointers with 0xff tags. We do ignore them when a
> memory access is checked, but not in the kfree() path.
> This untested patch should fix the issue:
> 
> 
> 
> ---
>  mm/kasan/common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 895dc5e2b3d5..0a81cc328049 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -407,7 +407,7 @@ static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
>  		return shadow_byte < 0 ||
>  			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
>  	else
> -		return tag != (u8)shadow_byte;
> +		return (tag != KASAN_TAG_KERNEL) && (tag != (u8)shadow_byte);
>  }
>  
>  static bool __kasan_slab_free(struct kmem_cache *cache, void *object,


Hi Andrey,

Doesn't ignoring the 0xff pointer tag miss the double-free case?
Also, please help review my other patch about memory corruption
identification.

Thanks for your response.

Walter





* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 14:05       ` Andrey Konovalov
@ 2019-08-19 14:22         ` Will Deacon
  2019-08-19 14:35           ` Robin Murphy
  2019-08-19 15:03         ` Mark Rutland
  1 sibling, 1 reply; 13+ messages in thread
From: Will Deacon @ 2019-08-19 14:22 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Mark Rutland, Walter Wu, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Will Deacon, Matthias Brugger,
	Andrew Morton, wsd_upstream, LKML, kasan-dev, linux-mediatek,
	Linux ARM

On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > > translate a physical address; the conversion resets the pointer tag to
> > > > > 0xff, producing a false positive.
> > > > >
> > > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > > pointer's original tag so that KASAN does not report a spurious memory
> > > > > corruption.
> > > >
> > > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > > in this area, but I /thought/ they were only needed after the support for
> > > > 52-bit virtual addressing in the kernel.
> > >
> > > I'm seeing similar issues in the virtio blk code (splat below), atop of
> > > the arm64 for-next/core branch. I think this is a latent issue, and
> > > people are only just starting to test with KASAN_SW_TAGS.
> > >
> > > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > > virt->page->virt, losing the per-object tag in the process.
> > >
> > > Our page_to_virt() seems to get a per-page tag, but this only makes
> > > sense if you're dealing with the page allocator, rather than something
> > > like SLUB which carves a page into smaller objects giving each object a
> > > distinct tag.
> > >
> > > Any round-trip of a pointer from SLUB is going to lose the per-object
> > > tag.
> >
> > Urgh, I wonder how this is supposed to work?
> >
> > If we end up having to check the KASAN shadow for *_to_virt(), then why
> > do we need to store anything in the page flags at all? Andrey?
> 
> As per commit 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated
> via pagealloc"), we should only save a non-0xff tag in page flags for
> non-slab pages.

Thanks, that makes sense. Hopefully the patch from Andrey R will solve
both of the reported splats, since I'd not realised they were both on the
kfree() path.

> Could you share your .config so I can reproduce this?

This is in the iopgtable code, so it's probably pretty tricky to trigger
at runtime unless you have the right IOMMU hardware, unfortunately.

Will


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 14:22         ` Will Deacon
@ 2019-08-19 14:35           ` Robin Murphy
  2019-08-19 14:43             ` Will Deacon
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2019-08-19 14:35 UTC (permalink / raw)
  To: Will Deacon, Andrey Konovalov
  Cc: Mark Rutland, Walter Wu, wsd_upstream, Catalin Marinas,
	Will Deacon, LKML, kasan-dev, linux-mediatek,
	Alexander Potapenko, Linux ARM, Matthias Brugger,
	Andrey Ryabinin, Andrew Morton, Dmitry Vyukov

On 19/08/2019 15:22, Will Deacon wrote:
> On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
>> On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
>>>
>>> On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
>>>> On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
>>>>> On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
>>>>>> __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
>>>>>> translate a physical address; the conversion resets the pointer tag to
>>>>>> 0xff, producing a false positive.
>>>>>>
>>>>>> When tag-based KASAN is enabled, phys_to_virt() needs to restore the
>>>>>> pointer's original tag so that KASAN does not report a spurious memory
>>>>>> corruption.
>>>>>
>>>>> Hmm. Which tree did you see this on? We've recently queued a load of fixes
>>>>> in this area, but I /thought/ they were only needed after the support for
>>>>> 52-bit virtual addressing in the kernel.
>>>>
>>>> I'm seeing similar issues in the virtio blk code (splat below), atop of
>>>> the arm64 for-next/core branch. I think this is a latent issue, and
>>>> people are only just starting to test with KASAN_SW_TAGS.
>>>>
>>>> It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
>>>> virt->page->virt, losing the per-object tag in the process.
>>>>
>>>> Our page_to_virt() seems to get a per-page tag, but this only makes
>>>> sense if you're dealing with the page allocator, rather than something
>>>> like SLUB which carves a page into smaller objects giving each object a
>>>> distinct tag.
>>>>
>>>> Any round-trip of a pointer from SLUB is going to lose the per-object
>>>> tag.
>>>
>>> Urgh, I wonder how this is supposed to work?
>>>
>>> If we end up having to check the KASAN shadow for *_to_virt(), then why
>>> do we need to store anything in the page flags at all? Andrey?
>>
>> As per commit 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated
>> via pagealloc"), we should only save a non-0xff tag in page flags for
>> non-slab pages.
> 
> Thanks, that makes sense. Hopefully the patch from Andrey R will solve
> both of the reported splats, since I'd not realised they were both on the
> kfree() path.
> 
>> Could you share your .config so I can reproduce this?
> 
> This is in the iopgtable code, so it's probably pretty tricky to trigger
> at runtime unless you have the right IOMMU hardware, unfortunately.

If simply freeing any entry from the l2_tables cache is sufficient, then 
the short-descriptor selftest should do the job, and that ought to run 
on anything (modulo insane RAM layouts).

Robin.


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 14:35           ` Robin Murphy
@ 2019-08-19 14:43             ` Will Deacon
  0 siblings, 0 replies; 13+ messages in thread
From: Will Deacon @ 2019-08-19 14:43 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Andrey Konovalov, Mark Rutland, Walter Wu,
	wsd_upstream, Catalin Marinas, LKML, kasan-dev, linux-mediatek,
	Alexander Potapenko, Linux ARM, Matthias Brugger,
	Andrey Ryabinin, Andrew Morton, Dmitry Vyukov

On Mon, Aug 19, 2019 at 03:35:16PM +0100, Robin Murphy wrote:
> On 19/08/2019 15:22, Will Deacon wrote:
> > On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> > > On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
> > > > 
> > > > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > > > > translate a physical address; the conversion resets the pointer tag to
> > > > > > > 0xff, producing a false positive.
> > > > > > >
> > > > > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > > > > pointer's original tag so that KASAN does not report a spurious memory
> > > > > > > corruption.
> > > > > > 
> > > > > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > > > > in this area, but I /thought/ they were only needed after the support for
> > > > > > 52-bit virtual addressing in the kernel.
> > > > > 
> > > > > I'm seeing similar issues in the virtio blk code (splat below), atop of
> > > > > the arm64 for-next/core branch. I think this is a latent issue, and
> > > > > people are only just starting to test with KASAN_SW_TAGS.
> > > > > 
> > > > > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > > > > virt->page->virt, losing the per-object tag in the process.
> > > > > 
> > > > > Our page_to_virt() seems to get a per-page tag, but this only makes
> > > > > sense if you're dealing with the page allocator, rather than something
> > > > > like SLUB which carves a page into smaller objects giving each object a
> > > > > distinct tag.
> > > > > 
> > > > > Any round-trip of a pointer from SLUB is going to lose the per-object
> > > > > tag.
> > > > 
> > > > Urgh, I wonder how this is supposed to work?
> > > > 
> > > > If we end up having to check the KASAN shadow for *_to_virt(), then why
> > > > do we need to store anything in the page flags at all? Andrey?
> > > 
> > > As per commit 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated
> > > via pagealloc"), we should only save a non-0xff tag in page flags for
> > > non-slab pages.
> > 
> > Thanks, that makes sense. Hopefully the patch from Andrey R will solve
> > both of the reported splats, since I'd not realised they were both on the
> > kfree() path.
> > 
> > > Could you share your .config so I can reproduce this?
> > 
> > This is in the iopgtable code, so it's probably pretty tricky to trigger
> > at runtime unless you have the right IOMMU hardware, unfortunately.
> 
> If simply freeing any entry from the l2_tables cache is sufficient, then the
> short-descriptor selftest should do the job, and that ought to run on
> anything (modulo insane RAM layouts).

Ok, so that would be defconfig + CONFIG_IOMMU_IO_PGTABLE_ARMV7S +
CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST + KASAN...
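Spelled out as a config fragment (assuming the tag-based mode is selected
explicitly via CONFIG_KASAN_SW_TAGS, which the splats in this thread imply),
that recipe would be roughly:

```
CONFIG_IOMMU_IO_PGTABLE_ARMV7S=y
CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST=y
CONFIG_KASAN=y
CONFIG_KASAN_SW_TAGS=y
```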

Will


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 14:05       ` Andrey Konovalov
  2019-08-19 14:22         ` Will Deacon
@ 2019-08-19 15:03         ` Mark Rutland
  2019-08-19 15:37           ` Andrey Konovalov
  1 sibling, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2019-08-19 15:03 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Will Deacon
  Cc: Walter Wu, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Matthias Brugger, Andrew Morton, wsd_upstream, LKML,
	kasan-dev, linux-mediatek, Linux ARM

On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > > translate a physical address, but that sets the pointer tag to 0xff,
> > > > > so there is a false positive.
> > > > >
> > > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > > original pointer tag to keep KASAN from reporting a spurious memory
> > > > > corruption.
> > > >
> > > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > > in this area, but I /thought/ they were only needed after the support for
> > > > 52-bit virtual addressing in the kernel.
> > >
> > > I'm seeing similar issues in the virtio blk code (splat below), atop
> > > the arm64 for-next/core branch. I think this is a latent issue, and
> > > people are only just starting to test with KASAN_SW_TAGS.
> > >
> > > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > > virt->page->virt, losing the per-object tag in the process.
> > >
> > > Our page_to_virt() seems to get a per-page tag, but this only makes
> > > sense if you're dealing with the page allocator, rather than something
> > > like SLUB which carves a page into smaller objects giving each object a
> > > distinct tag.
> > >
> > > Any round-trip of a pointer from SLUB is going to lose the per-object
> > > tag.
> >
> > Urgh, I wonder how this is supposed to work?
> >
> > If we end up having to check the KASAN shadow for *_to_virt(), then why
> > do we need to store anything in the page flags at all? Andrey?
> 
> As per 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via
> pagealloc") we should only save a non-0xff tag in page flags for non
> slab pages.
> 
> Could you share your .config so I can reproduce this?

I wrote a test (below) to do so. :)

It fires with arm64 defconfig, + CONFIG_TEST_KASAN=m.

With Andrey Ryabinin's patch it works as expected with no KASAN splats
for the two new test cases.

Thanks,
Mark.

---->8----
From 7e8569b558fca21ad4e80fddae659591bc84ce1f Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Mon, 19 Aug 2019 15:39:32 +0100
Subject: [PATCH] lib/test_kasan: add roundtrip tests

In several places we needs to be able to operate on pointers which have
gone via a roundtrip:

	virt -> {phys,page} -> virt

With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
{phys,page} -> virt conversion will use KASAN_TAG_KERNEL.

This patch adds tests to ensure that this works as expected, without
false positives.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 lib/test_kasan.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index b63b367a94e8..cf7b93f0d90c 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -19,6 +19,8 @@
 #include <linux/string.h>
 #include <linux/uaccess.h>
 
+#include <asm/page.h>
+
 /*
  * Note: test functions are marked noinline so that their names appear in
  * reports.
@@ -337,6 +339,42 @@ static noinline void __init kmalloc_uaf2(void)
 	kfree(ptr2);
 }
 
+static noinline void __init kfree_via_page(void)
+{
+	char *ptr;
+	size_t size = 8;
+	struct page *page;
+	unsigned long offset;
+
+	pr_info("invalid-free false positive (via page)\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	page = virt_to_page(ptr);
+	offset = offset_in_page(ptr);
+	kfree(page_address(page) + offset);
+}
+
+static noinline void __init kfree_via_phys(void)
+{
+	char *ptr;
+	size_t size = 8;
+	phys_addr_t phys;
+
+	pr_info("invalid-free false positive (via phys)\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	phys = virt_to_phys(ptr);
+	kfree(phys_to_virt(phys));
+}
+
 static noinline void __init kmem_cache_oob(void)
 {
 	char *p;
@@ -737,6 +775,8 @@ static int __init kmalloc_tests_init(void)
 	kmalloc_uaf();
 	kmalloc_uaf_memset();
 	kmalloc_uaf2();
+	kfree_via_page();
+	kfree_via_phys();
 	kmem_cache_oob();
 	memcg_accounted_kmem_cache();
 	kasan_stack_oob();
-- 
2.11.0



* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 15:03         ` Mark Rutland
@ 2019-08-19 15:37           ` Andrey Konovalov
  2019-08-19 16:03             ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Andrey Konovalov @ 2019-08-19 15:37 UTC (permalink / raw)
  To: Mark Rutland, Will Deacon
  Cc: Andrey Ryabinin, Walter Wu, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Matthias Brugger, Andrew Morton,
	wsd_upstream, LKML, kasan-dev, linux-mediatek, Linux ARM

On Mon, Aug 19, 2019 at 5:03 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> > On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
> > >
> > > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > > > translate a physical address, but that sets the pointer tag to 0xff,
> > > > > > so there is a false positive.
> > > > > >
> > > > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > > > original pointer tag to keep KASAN from reporting a spurious memory
> > > > > > corruption.
> > > > >
> > > > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > > > in this area, but I /thought/ they were only needed after the support for
> > > > > 52-bit virtual addressing in the kernel.
> > > >
> > > > I'm seeing similar issues in the virtio blk code (splat below), atop
> > > > the arm64 for-next/core branch. I think this is a latent issue, and
> > > > people are only just starting to test with KASAN_SW_TAGS.
> > > >
> > > > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > > > virt->page->virt, losing the per-object tag in the process.
> > > >
> > > > Our page_to_virt() seems to get a per-page tag, but this only makes
> > > > sense if you're dealing with the page allocator, rather than something
> > > > like SLUB which carves a page into smaller objects giving each object a
> > > > distinct tag.
> > > >
> > > > Any round-trip of a pointer from SLUB is going to lose the per-object
> > > > tag.
> > >
> > > Urgh, I wonder how this is supposed to work?
> > >
> > > If we end up having to check the KASAN shadow for *_to_virt(), then why
> > > do we need to store anything in the page flags at all? Andrey?
> >
> > As per 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via
> > pagealloc") we should only save a non-0xff tag in page flags for non
> > slab pages.
> >
> > Could you share your .config so I can reproduce this?
>
> I wrote a test (below) to do so. :)
>
> It fires with arm64 defconfig, + CONFIG_TEST_KASAN=m.
>
> With Andrey Ryabinin's patch it works as expected with no KASAN splats
> for the two new test cases.

OK, Andrey's patch makes sense and fixes both Mark's test patch and
reports from CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST.

Tested-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>

on both patches.

>
> Thanks,
> Mark.
>
> ---->8----
> From 7e8569b558fca21ad4e80fddae659591bc84ce1f Mon Sep 17 00:00:00 2001
> From: Mark Rutland <mark.rutland@arm.com>
> Date: Mon, 19 Aug 2019 15:39:32 +0100
> Subject: [PATCH] lib/test_kasan: add roundtrip tests
>
> In several places we needs to be able to operate on pointers which have

"needs" => "need"

> gone via a roundtrip:
>
>         virt -> {phys,page} -> virt
>
> With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
> {phys,page} -> virt conversion will use KASAN_TAG_KERNEL.
>
> This patch adds tests to ensure that this works as expected, without
> false positives.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Andrey Konovalov <andreyknvl@google.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  lib/test_kasan.c | 40 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 40 insertions(+)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index b63b367a94e8..cf7b93f0d90c 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -19,6 +19,8 @@
>  #include <linux/string.h>
>  #include <linux/uaccess.h>
>
> +#include <asm/page.h>
> +
>  /*
>   * Note: test functions are marked noinline so that their names appear in
>   * reports.
> @@ -337,6 +339,42 @@ static noinline void __init kmalloc_uaf2(void)
>         kfree(ptr2);
>  }
>
> +static noinline void __init kfree_via_page(void)
> +{
> +       char *ptr;
> +       size_t size = 8;
> +       struct page *page;
> +       unsigned long offset;
> +
> +       pr_info("invalid-free false positive (via page)\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       page = virt_to_page(ptr);
> +       offset = offset_in_page(ptr);
> +       kfree(page_address(page) + offset);
> +}
> +
> +static noinline void __init kfree_via_phys(void)
> +{
> +       char *ptr;
> +       size_t size = 8;
> +       phys_addr_t phys;
> +
> +       pr_info("invalid-free false positive (via phys)\n");
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       if (!ptr) {
> +               pr_err("Allocation failed\n");
> +               return;
> +       }
> +
> +       phys = virt_to_phys(ptr);
> +       kfree(phys_to_virt(phys));
> +}
> +
>  static noinline void __init kmem_cache_oob(void)
>  {
>         char *p;
> @@ -737,6 +775,8 @@ static int __init kmalloc_tests_init(void)
>         kmalloc_uaf();
>         kmalloc_uaf_memset();
>         kmalloc_uaf2();
> +       kfree_via_page();
> +       kfree_via_phys();
>         kmem_cache_oob();
>         memcg_accounted_kmem_cache();
>         kasan_stack_oob();
> --
> 2.11.0
>


* Re: [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan
  2019-08-19 15:37           ` Andrey Konovalov
@ 2019-08-19 16:03             ` Mark Rutland
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2019-08-19 16:03 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Will Deacon, Andrey Ryabinin, Walter Wu, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Will Deacon, Matthias Brugger,
	Andrew Morton, wsd_upstream, LKML, kasan-dev, linux-mediatek,
	Linux ARM

On Mon, Aug 19, 2019 at 05:37:36PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 5:03 PM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> > > On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <will@kernel.org> wrote:
> > > >
> > > > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > > > __arm_v7s_unmap() calls iopte_deref(), which uses phys_to_virt() to
> > > > > > > translate a physical address, but that sets the pointer tag to 0xff,
> > > > > > > so there is a false positive.
> > > > > > >
> > > > > > > When tag-based KASAN is enabled, phys_to_virt() needs to restore the
> > > > > > > original pointer tag to keep KASAN from reporting a spurious memory
> > > > > > > corruption.
> > > > > >
> > > > > > Hmm. Which tree did you see this on? We've recently queued a load of fixes
> > > > > > in this area, but I /thought/ they were only needed after the support for
> > > > > > 52-bit virtual addressing in the kernel.
> > > > >
> > > > > I'm seeing similar issues in the virtio blk code (splat below), atop
> > > > > the arm64 for-next/core branch. I think this is a latent issue, and
> > > > > people are only just starting to test with KASAN_SW_TAGS.
> > > > >
> > > > > It looks like the virtio blk code will round-trip a SLUB-allocated pointer from
> > > > > virt->page->virt, losing the per-object tag in the process.
> > > > >
> > > > > Our page_to_virt() seems to get a per-page tag, but this only makes
> > > > > sense if you're dealing with the page allocator, rather than something
> > > > > like SLUB which carves a page into smaller objects giving each object a
> > > > > distinct tag.
> > > > >
> > > > > Any round-trip of a pointer from SLUB is going to lose the per-object
> > > > > tag.
> > > >
> > > > Urgh, I wonder how this is supposed to work?
> > > >
> > > > If we end up having to check the KASAN shadow for *_to_virt(), then why
> > > > do we need to store anything in the page flags at all? Andrey?
> > >
> > > As per 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via
> > > pagealloc") we should only save a non-0xff tag in page flags for non
> > > slab pages.
> > >
> > > Could you share your .config so I can reproduce this?
> >
> > I wrote a test (below) to do so. :)
> >
> > It fires with arm64 defconfig, + CONFIG_TEST_KASAN=m.
> >
> > With Andrey Ryabinin's patch it works as expected with no KASAN splats
> > for the two new test cases.
> 
> OK, Andrey's patch makes sense and fixes both Mark's test patch and
> reports from CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST.
> 
> Tested-by: Andrey Konovalov <andreyknvl@google.com>
> Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
> 
> on both patches.
> 
> >
> > Thanks,
> > Mark.
> >
> > ---->8----
> > From 7e8569b558fca21ad4e80fddae659591bc84ce1f Mon Sep 17 00:00:00 2001
> > From: Mark Rutland <mark.rutland@arm.com>
> > Date: Mon, 19 Aug 2019 15:39:32 +0100
> > Subject: [PATCH] lib/test_kasan: add roundtrip tests
> >
> > In several places we needs to be able to operate on pointers which have
> 
> "needs" => "need"

Thanks! 

I'll spin a standalone v2 of this with that fixed and your tags folded
in.

Mark.

> 
> > gone via a roundtrip:
> >
> >         virt -> {phys,page} -> virt
> >
> > With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
> > {phys,page} -> virt conversion will use KASAN_TAG_KERNEL.
> >
> > This patch adds tests to ensure that this works as expected, without
> > false positives.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > Cc: Andrey Konovalov <andreyknvl@google.com>
> > Cc: Will Deacon <will.deacon@arm.com>
> > ---
> >  lib/test_kasan.c | 40 ++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 40 insertions(+)
> >
> > diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> > index b63b367a94e8..cf7b93f0d90c 100644
> > --- a/lib/test_kasan.c
> > +++ b/lib/test_kasan.c
> > @@ -19,6 +19,8 @@
> >  #include <linux/string.h>
> >  #include <linux/uaccess.h>
> >
> > +#include <asm/page.h>
> > +
> >  /*
> >   * Note: test functions are marked noinline so that their names appear in
> >   * reports.
> > @@ -337,6 +339,42 @@ static noinline void __init kmalloc_uaf2(void)
> >         kfree(ptr2);
> >  }
> >
> > +static noinline void __init kfree_via_page(void)
> > +{
> > +       char *ptr;
> > +       size_t size = 8;
> > +       struct page *page;
> > +       unsigned long offset;
> > +
> > +       pr_info("invalid-free false positive (via page)\n");
> > +       ptr = kmalloc(size, GFP_KERNEL);
> > +       if (!ptr) {
> > +               pr_err("Allocation failed\n");
> > +               return;
> > +       }
> > +
> > +       page = virt_to_page(ptr);
> > +       offset = offset_in_page(ptr);
> > +       kfree(page_address(page) + offset);
> > +}
> > +
> > +static noinline void __init kfree_via_phys(void)
> > +{
> > +       char *ptr;
> > +       size_t size = 8;
> > +       phys_addr_t phys;
> > +
> > +       pr_info("invalid-free false positive (via phys)\n");
> > +       ptr = kmalloc(size, GFP_KERNEL);
> > +       if (!ptr) {
> > +               pr_err("Allocation failed\n");
> > +               return;
> > +       }
> > +
> > +       phys = virt_to_phys(ptr);
> > +       kfree(phys_to_virt(phys));
> > +}
> > +
> >  static noinline void __init kmem_cache_oob(void)
> >  {
> >         char *p;
> > @@ -737,6 +775,8 @@ static int __init kmalloc_tests_init(void)
> >         kmalloc_uaf();
> >         kmalloc_uaf_memset();
> >         kmalloc_uaf2();
> > +       kfree_via_page();
> > +       kfree_via_phys();
> >         kmem_cache_oob();
> >         memcg_accounted_kmem_cache();
> >         kasan_stack_oob();
> > --
> > 2.11.0
> >


end of thread, other threads:[~2019-08-19 16:03 UTC | newest]

Thread overview: 13+ messages
2019-08-19 11:44 [PATCH] arm64: kasan: fix phys_to_virt() false positive on tag-based kasan Walter Wu
2019-08-19 12:56 ` Will Deacon
2019-08-19 13:23   ` Mark Rutland
2019-08-19 13:34     ` Will Deacon
2019-08-19 14:05       ` Andrey Konovalov
2019-08-19 14:22         ` Will Deacon
2019-08-19 14:35           ` Robin Murphy
2019-08-19 14:43             ` Will Deacon
2019-08-19 15:03         ` Mark Rutland
2019-08-19 15:37           ` Andrey Konovalov
2019-08-19 16:03             ` Mark Rutland
2019-08-19 14:06       ` Andrey Ryabinin
2019-08-19 14:21         ` Walter Wu
