* [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
@ 2022-05-25 14:26 syzbot
  2022-05-25 16:58 ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: syzbot @ 2022-05-25 14:26 UTC (permalink / raw)
  To: akpm, linux-kernel, linux-mm, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    3b5e1590a267 Merge tag 'gpio-fixes-for-v5.18' of git://git..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15ae2bfdf00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=d84df8e1a4c4d5a4
dashboard link: https://syzkaller.appspot.com/bug?extid=5b96d55e5b54924c77ad
compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: use-after-free in do_sync_mmap_readahead+0x465/0x9f0 mm/filemap.c:3006
Read of size 8 at addr ffff88801fedb050 by task syz-executor.5/1755

CPU: 0 PID: 1755 Comm: syz-executor.5 Not tainted 5.18.0-rc7-syzkaller-00136-g3b5e1590a267 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_address_description+0x65/0x4b0 mm/kasan/report.c:313
 print_report+0xf4/0x210 mm/kasan/report.c:429
 kasan_report+0xfb/0x130 mm/kasan/report.c:491
 do_sync_mmap_readahead+0x465/0x9f0 mm/filemap.c:3006
 filemap_fault+0x349/0xf20 mm/filemap.c:3138
 __do_fault+0x139/0x4f0 mm/memory.c:3915
 do_read_fault mm/memory.c:4240 [inline]
 do_fault mm/memory.c:4369 [inline]
 handle_pte_fault mm/memory.c:4627 [inline]
 __handle_mm_fault mm/memory.c:4763 [inline]
 handle_mm_fault+0x2bbb/0x3940 mm/memory.c:4861
 faultin_page mm/gup.c:878 [inline]
 __get_user_pages+0x4fd/0x1150 mm/gup.c:1099
 populate_vma_page_range+0x216/0x2b0 mm/gup.c:1442
 __mm_populate+0x2ea/0x4d0 mm/gup.c:1555
 mm_populate include/linux/mm.h:2701 [inline]
 vm_mmap_pgoff+0x241/0x2f0 mm/util.c:524
 ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7faf9f4890e9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fafa06c9168 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007faf9f59bf60 RCX: 00007faf9f4890e9
RDX: 0000000001000006 RSI: 0000000000b34900 RDI: 0000000020000000
RBP: 00007faf9f4e308d R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff03616e1f R14: 00007fafa06c9300 R15: 0000000000022000
 </TASK>

Allocated by task 1755:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track mm/kasan/common.c:45 [inline]
 set_alloc_info mm/kasan/common.c:436 [inline]
 __kasan_slab_alloc+0xb2/0xe0 mm/kasan/common.c:469
 kasan_slab_alloc include/linux/kasan.h:224 [inline]
 slab_post_alloc_hook mm/slab.h:749 [inline]
 slab_alloc_node mm/slub.c:3217 [inline]
 slab_alloc mm/slub.c:3225 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
 kmem_cache_alloc+0x199/0x2f0 mm/slub.c:3242
 vm_area_alloc+0x20/0xe0 kernel/fork.c:459
 mmap_region+0xbef/0x1730 mm/mmap.c:1771
 do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
 vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
 ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Freed by task 1759:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track+0x4c/0x70 mm/kasan/common.c:45
 kasan_set_free_info+0x1f/0x40 mm/kasan/generic.c:370
 ____kasan_slab_free+0xd8/0x110 mm/kasan/common.c:366
 kasan_slab_free include/linux/kasan.h:200 [inline]
 slab_free_hook mm/slub.c:1728 [inline]
 slab_free_freelist_hook+0x12e/0x1a0 mm/slub.c:1754
 slab_free mm/slub.c:3510 [inline]
 kmem_cache_free+0xc7/0x270 mm/slub.c:3527
 remove_vma mm/mmap.c:189 [inline]
 remove_vma_list mm/mmap.c:2625 [inline]
 __do_munmap+0x1b4f/0x1cd0 mm/mmap.c:2863
 do_munmap mm/mmap.c:2871 [inline]
 munmap_vma_range mm/mmap.c:604 [inline]
 mmap_region+0x97a/0x1730 mm/mmap.c:1746
 do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
 vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
 ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

The buggy address belongs to the object at ffff88801fedb000
 which belongs to the cache vm_area_struct of size 200
The buggy address is located 80 bytes inside of
 200-byte region [ffff88801fedb000, ffff88801fedb0c8)

The buggy address belongs to the physical page:
page:ffffea00007fb6c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1fedb
memcg:ffff88801b6a0001
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 dead000000000100 dead000000000122 ffff888140006a00
raw: 0000000000000000 00000000000f000f 00000001ffffffff ffff88801b6a0001
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 20293, tgid 20293 (modprobe), ts 461559890566, free_ts 461429792689
 prep_new_page mm/page_alloc.c:2441 [inline]
 get_page_from_freelist+0x72e/0x7a0 mm/page_alloc.c:4182
 __alloc_pages+0x26c/0x5f0 mm/page_alloc.c:5408
 alloc_slab_page+0x70/0xf0 mm/slub.c:1799
 allocate_slab+0x5e/0x560 mm/slub.c:1944
 new_slab mm/slub.c:2004 [inline]
 ___slab_alloc+0x41e/0xcd0 mm/slub.c:3005
 __slab_alloc mm/slub.c:3092 [inline]
 slab_alloc_node mm/slub.c:3183 [inline]
 slab_alloc mm/slub.c:3225 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
 kmem_cache_alloc+0x246/0x2f0 mm/slub.c:3242
 vm_area_alloc+0x20/0xe0 kernel/fork.c:459
 mmap_region+0xbef/0x1730 mm/mmap.c:1771
 do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
 vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
 ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1356 [inline]
 free_pcp_prepare+0x812/0x900 mm/page_alloc.c:1406
 free_unref_page_prepare mm/page_alloc.c:3328 [inline]
 free_unref_page_list+0x12c/0x890 mm/page_alloc.c:3460
 release_pages+0x2a04/0x2cb0 mm/swap.c:980
 __pagevec_release+0x7d/0xf0 mm/swap.c:1000
 pagevec_release include/linux/pagevec.h:82 [inline]
 folio_batch_release include/linux/pagevec.h:146 [inline]
 truncate_inode_pages_range+0x4a2/0x17b0 mm/truncate.c:373
 kill_bdev block/bdev.c:77 [inline]
 blkdev_flush_mapping+0x181/0x350 block/bdev.c:656
 blkdev_put_whole block/bdev.c:687 [inline]
 blkdev_put+0x49f/0x790 block/bdev.c:947
 blkdev_close+0x55/0x80 block/fops.c:512
 __fput+0x3b9/0x820 fs/file_table.c:317
 task_work_run+0x146/0x1c0 kernel/task_work.c:164
 exit_task_work include/linux/task_work.h:37 [inline]
 do_exit+0x547/0x1eb0 kernel/exit.c:795
 do_group_exit+0x23b/0x2f0 kernel/exit.c:925
 get_signal+0x172f/0x1780 kernel/signal.c:2864
 arch_do_signal_or_restart+0x8d/0x750 arch/x86/kernel/signal.c:867
 exit_to_user_mode_loop+0x74/0x160 kernel/entry/common.c:166
 exit_to_user_mode_prepare+0xad/0x110 kernel/entry/common.c:201

Memory state around the buggy address:
 ffff88801fedaf00: fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc
 ffff88801fedaf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88801fedb000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                 ^
 ffff88801fedb080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
 ffff88801fedb100: fc fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 14:26 [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead syzbot
@ 2022-05-25 16:58 ` Andrew Morton
  2022-05-25 17:57   ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2022-05-25 16:58 UTC (permalink / raw)
  To: syzbot; +Cc: linux-kernel, linux-mm, syzkaller-bugs, Matthew Wilcox

On Wed, 25 May 2022 07:26:22 -0700 syzbot <syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com> wrote:

> Hello,
> 
> syzbot found the following issue on:
> 
> HEAD commit:    3b5e1590a267 Merge tag 'gpio-fixes-for-v5.18' of git://git..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=15ae2bfdf00000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=d84df8e1a4c4d5a4
> dashboard link: https://syzkaller.appspot.com/bug?extid=5b96d55e5b54924c77ad
> compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
> 
> Unfortunately, I don't have any reproducer for this issue yet.
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
> 
> ==================================================================
> BUG: KASAN: use-after-free in do_sync_mmap_readahead+0x465/0x9f0 mm/filemap.c:3006
> Read of size 8 at addr ffff88801fedb050 by task syz-executor.5/1755

A race?

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* Use the readahead code, even if readahead is disabled */
	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
		ra->size = HPAGE_PMD_NR;
		/*
		 * Fetch two PMD folios, so we get the chance to actually
		 * readahead, unless we've been told not to.
		 */
-->		if (!(vmf->vma->vm_flags & VM_RAND_READ))
			ra->size *= 2;
		ra->async_size = HPAGE_PMD_NR;
		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
		return fpin;
	}
#endif

Reading from vmf->vma->vm_flags was OK, then it suddenly wasn't.
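
The alloc/free stacks below fit that: maybe_unlock_mmap_for_io() has
already dropped mmap_lock by the time the flagged line runs, so a
concurrent mmap()/munmap() in another task can free the VMA under us.
One interleaving consistent with those stacks (a sketch inferred from
the traces, not taken from a reproducer):

   task 1755                                  task 1759
   ---------                                  ---------
   mmap(..., MAP_POPULATE)
     __mm_populate()
       handle_mm_fault()
         filemap_fault()
           do_sync_mmap_readahead()
             maybe_unlock_mmap_for_io()
               get_file(vma->vm_file)
               mmap_read_unlock(mm)
                                              mmap() overlapping the range
                                                mmap_region()
                                                  __do_munmap()
                                                    remove_vma()
                                                      kmem_cache_free(vma)
             if (!(vmf->vma->vm_flags & VM_RAND_READ))   <-- reads freed VMA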

> 
> CPU: 0 PID: 1755 Comm: syz-executor.5 Not tainted 5.18.0-rc7-syzkaller-00136-g3b5e1590a267 #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
>  <TASK>
>  __dump_stack lib/dump_stack.c:88 [inline]
>  dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
>  print_address_description+0x65/0x4b0 mm/kasan/report.c:313
>  print_report+0xf4/0x210 mm/kasan/report.c:429
>  kasan_report+0xfb/0x130 mm/kasan/report.c:491
>  do_sync_mmap_readahead+0x465/0x9f0 mm/filemap.c:3006
>  filemap_fault+0x349/0xf20 mm/filemap.c:3138
>  __do_fault+0x139/0x4f0 mm/memory.c:3915
>  do_read_fault mm/memory.c:4240 [inline]
>  do_fault mm/memory.c:4369 [inline]
>  handle_pte_fault mm/memory.c:4627 [inline]
>  __handle_mm_fault mm/memory.c:4763 [inline]
>  handle_mm_fault+0x2bbb/0x3940 mm/memory.c:4861
>  faultin_page mm/gup.c:878 [inline]
>  __get_user_pages+0x4fd/0x1150 mm/gup.c:1099
>  populate_vma_page_range+0x216/0x2b0 mm/gup.c:1442
>  __mm_populate+0x2ea/0x4d0 mm/gup.c:1555
>  mm_populate include/linux/mm.h:2701 [inline]
>  vm_mmap_pgoff+0x241/0x2f0 mm/util.c:524
>  ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> RIP: 0033:0x7faf9f4890e9
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007fafa06c9168 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
> RAX: ffffffffffffffda RBX: 00007faf9f59bf60 RCX: 00007faf9f4890e9
> RDX: 0000000001000006 RSI: 0000000000b34900 RDI: 0000000020000000
> RBP: 00007faf9f4e308d R08: 0000000000000004 R09: 0000000000000000
> R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007fff03616e1f R14: 00007fafa06c9300 R15: 0000000000022000
>  </TASK>
> 
> Allocated by task 1755:
>  kasan_save_stack mm/kasan/common.c:38 [inline]
>  kasan_set_track mm/kasan/common.c:45 [inline]
>  set_alloc_info mm/kasan/common.c:436 [inline]
>  __kasan_slab_alloc+0xb2/0xe0 mm/kasan/common.c:469
>  kasan_slab_alloc include/linux/kasan.h:224 [inline]
>  slab_post_alloc_hook mm/slab.h:749 [inline]
>  slab_alloc_node mm/slub.c:3217 [inline]
>  slab_alloc mm/slub.c:3225 [inline]
>  __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
>  kmem_cache_alloc+0x199/0x2f0 mm/slub.c:3242
>  vm_area_alloc+0x20/0xe0 kernel/fork.c:459
>  mmap_region+0xbef/0x1730 mm/mmap.c:1771
>  do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
>  vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
>  ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> Freed by task 1759:
>  kasan_save_stack mm/kasan/common.c:38 [inline]
>  kasan_set_track+0x4c/0x70 mm/kasan/common.c:45
>  kasan_set_free_info+0x1f/0x40 mm/kasan/generic.c:370
>  ____kasan_slab_free+0xd8/0x110 mm/kasan/common.c:366
>  kasan_slab_free include/linux/kasan.h:200 [inline]
>  slab_free_hook mm/slub.c:1728 [inline]
>  slab_free_freelist_hook+0x12e/0x1a0 mm/slub.c:1754
>  slab_free mm/slub.c:3510 [inline]
>  kmem_cache_free+0xc7/0x270 mm/slub.c:3527
>  remove_vma mm/mmap.c:189 [inline]
>  remove_vma_list mm/mmap.c:2625 [inline]
>  __do_munmap+0x1b4f/0x1cd0 mm/mmap.c:2863
>  do_munmap mm/mmap.c:2871 [inline]
>  munmap_vma_range mm/mmap.c:604 [inline]
>  mmap_region+0x97a/0x1730 mm/mmap.c:1746
>  do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
>  vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
>  ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> The buggy address belongs to the object at ffff88801fedb000
>  which belongs to the cache vm_area_struct of size 200
> The buggy address is located 80 bytes inside of
>  200-byte region [ffff88801fedb000, ffff88801fedb0c8)
> 
> The buggy address belongs to the physical page:
> page:ffffea00007fb6c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1fedb
> memcg:ffff88801b6a0001
> flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
> raw: 00fff00000000200 dead000000000100 dead000000000122 ffff888140006a00
> raw: 0000000000000000 00000000000f000f 00000001ffffffff ffff88801b6a0001
> page dumped because: kasan: bad access detected
> page_owner tracks the page as allocated
> page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 20293, tgid 20293 (modprobe), ts 461559890566, free_ts 461429792689
>  prep_new_page mm/page_alloc.c:2441 [inline]
>  get_page_from_freelist+0x72e/0x7a0 mm/page_alloc.c:4182
>  __alloc_pages+0x26c/0x5f0 mm/page_alloc.c:5408
>  alloc_slab_page+0x70/0xf0 mm/slub.c:1799
>  allocate_slab+0x5e/0x560 mm/slub.c:1944
>  new_slab mm/slub.c:2004 [inline]
>  ___slab_alloc+0x41e/0xcd0 mm/slub.c:3005
>  __slab_alloc mm/slub.c:3092 [inline]
>  slab_alloc_node mm/slub.c:3183 [inline]
>  slab_alloc mm/slub.c:3225 [inline]
>  __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
>  kmem_cache_alloc+0x246/0x2f0 mm/slub.c:3242
>  vm_area_alloc+0x20/0xe0 kernel/fork.c:459
>  mmap_region+0xbef/0x1730 mm/mmap.c:1771
>  do_mmap+0x7a7/0xdf0 mm/mmap.c:1582
>  vm_mmap_pgoff+0x1e5/0x2f0 mm/util.c:519
>  ksys_mmap_pgoff+0x48c/0x6d0 mm/mmap.c:1628
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> page last free stack trace:
>  reset_page_owner include/linux/page_owner.h:24 [inline]
>  free_pages_prepare mm/page_alloc.c:1356 [inline]
>  free_pcp_prepare+0x812/0x900 mm/page_alloc.c:1406
>  free_unref_page_prepare mm/page_alloc.c:3328 [inline]
>  free_unref_page_list+0x12c/0x890 mm/page_alloc.c:3460
>  release_pages+0x2a04/0x2cb0 mm/swap.c:980
>  __pagevec_release+0x7d/0xf0 mm/swap.c:1000
>  pagevec_release include/linux/pagevec.h:82 [inline]
>  folio_batch_release include/linux/pagevec.h:146 [inline]
>  truncate_inode_pages_range+0x4a2/0x17b0 mm/truncate.c:373
>  kill_bdev block/bdev.c:77 [inline]
>  blkdev_flush_mapping+0x181/0x350 block/bdev.c:656
>  blkdev_put_whole block/bdev.c:687 [inline]
>  blkdev_put+0x49f/0x790 block/bdev.c:947
>  blkdev_close+0x55/0x80 block/fops.c:512
>  __fput+0x3b9/0x820 fs/file_table.c:317
>  task_work_run+0x146/0x1c0 kernel/task_work.c:164
>  exit_task_work include/linux/task_work.h:37 [inline]
>  do_exit+0x547/0x1eb0 kernel/exit.c:795
>  do_group_exit+0x23b/0x2f0 kernel/exit.c:925
>  get_signal+0x172f/0x1780 kernel/signal.c:2864
>  arch_do_signal_or_restart+0x8d/0x750 arch/x86/kernel/signal.c:867
>  exit_to_user_mode_loop+0x74/0x160 kernel/entry/common.c:166
>  exit_to_user_mode_prepare+0xad/0x110 kernel/entry/common.c:201
> 
> Memory state around the buggy address:
>  ffff88801fedaf00: fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc
>  ffff88801fedaf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> >ffff88801fedb000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>                                                  ^
>  ffff88801fedb080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
>  ffff88801fedb100: fc fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ==================================================================
> 
> 
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
> 
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 16:58 ` Andrew Morton
@ 2022-05-25 17:57   ` Matthew Wilcox
  2022-05-25 18:33     ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2022-05-25 17:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: syzbot, linux-kernel, linux-mm, syzkaller-bugs

On Wed, May 25, 2022 at 09:58:42AM -0700, Andrew Morton wrote:
> On Wed, 25 May 2022 07:26:22 -0700 syzbot <syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com> wrote:
> > BUG: KASAN: use-after-free in do_sync_mmap_readahead+0x465/0x9f0 mm/filemap.c:3006
> > Read of size 8 at addr ffff88801fedb050 by task syz-executor.5/1755
> 
> A race?
> 
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 	/* Use the readahead code, even if readahead is disabled */
> 	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
> 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> 		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
> 		ra->size = HPAGE_PMD_NR;
> 		/*
> 		 * Fetch two PMD folios, so we get the chance to actually
> 		 * readahead, unless we've been told not to.
> 		 */
> -->		if (!(vmf->vma->vm_flags & VM_RAND_READ))
> 			ra->size *= 2;
> 		ra->async_size = HPAGE_PMD_NR;
> 		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> 		return fpin;
> 	}
> #endif
> 
> Reading from vmf->vma->vm_flags was OK, then it suddenly wasn't.

Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
pinned, but the VMA isn't.  I'll whip up a patch.
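
For reference, maybe_unlock_mmap_for_io() looks roughly like this
(paraphrased from mm/filemap.c of this era, so details may differ
slightly): it takes a reference on the struct file, but nothing keeps
the vm_area_struct alive once the lock is gone.

static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
                                             struct file *fpin)
{
        int flags = vmf->flags;

        if (fpin)
                return fpin;

        if (fault_flag_allow_retry_first(flags) &&
            !(flags & FAULT_FLAG_RETRY_NOWAIT)) {
                /* The file stays pinned across the readahead I/O... */
                fpin = get_file(vmf->vma->vm_file);
                /* ...but after this, a concurrent munmap() can free vmf->vma. */
                mmap_read_unlock(vmf->vma->vm_mm);
        }
        return fpin;
}

So anything that dereferences vmf->vma after the unlock, like the
vm_flags checks in do_sync_mmap_readahead(), has to happen before the
lock is dropped or work from a cached copy.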


* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 17:57   ` Matthew Wilcox
@ 2022-05-25 18:33     ` Matthew Wilcox
  2022-05-25 18:33       ` syzbot
                         ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Matthew Wilcox @ 2022-05-25 18:33 UTC (permalink / raw)
  To: Andrew Morton; +Cc: syzbot, linux-kernel, linux-mm, syzkaller-bugs

On Wed, May 25, 2022 at 06:57:55PM +0100, Matthew Wilcox wrote:
> 
> Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
> pinned, but the VMA isn't.  I'll whip up a patch.

#syz test

From 01a4917c4cfe400eb310eba4f2fa466d381623c1 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Date: Wed, 25 May 2022 14:23:45 -0400
Subject: [PATCH] mm/filemap: Cache the value of vm_flags

After we have unlocked the mmap_lock for I/O, the file is pinned, but
the VMA is not.  Checking this flag after that can be a use-after-free.
It's not a terribly interesting use-after-free as it can only read one
bit, and it's used to decide whether to read 2MB or 4MB.  But it
upsets the automated tools and it's generally bad practice anyway,
so let's fix it.

Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 9a1eef6c5d35..61dd39990fda 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2991,11 +2991,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	struct address_space *mapping = file->f_mapping;
 	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
 	struct file *fpin = NULL;
+	unsigned long vm_flags = vmf->vma->vm_flags;
 	unsigned int mmap_miss;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/* Use the readahead code, even if readahead is disabled */
-	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
+	if (vm_flags & VM_HUGEPAGE) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
 		ra->size = HPAGE_PMD_NR;
@@ -3003,7 +3004,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 		 * Fetch two PMD folios, so we get the chance to actually
 		 * readahead, unless we've been told not to.
 		 */
-		if (!(vmf->vma->vm_flags & VM_RAND_READ))
+		if (!(vm_flags & VM_RAND_READ))
 			ra->size *= 2;
 		ra->async_size = HPAGE_PMD_NR;
 		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
@@ -3012,12 +3013,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 #endif
 
 	/* If we don't want any read-ahead, don't bother */
-	if (vmf->vma->vm_flags & VM_RAND_READ)
+	if (vm_flags & VM_RAND_READ)
 		return fpin;
 	if (!ra->ra_pages)
 		return fpin;
 
-	if (vmf->vma->vm_flags & VM_SEQ_READ) {
+	if (vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		page_cache_sync_ra(&ractl, ra->ra_pages);
 		return fpin;
-- 
2.34.1


* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 18:33     ` Matthew Wilcox
@ 2022-05-25 18:33       ` syzbot
  2022-05-25 18:33       ` syzbot
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: syzbot @ 2022-05-25 18:33 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-kernel, linux-mm, syzkaller-bugs, willy

> On Wed, May 25, 2022 at 06:57:55PM +0100, Matthew Wilcox wrote:
>> 
>> Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
>> pinned, but the VMA isn't.  I'll whip up a patch.
>
> #syz test

want 2 args (repo, branch), got 7

>
> From 01a4917c4cfe400eb310eba4f2fa466d381623c1 Mon Sep 17 00:00:00 2001
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Date: Wed, 25 May 2022 14:23:45 -0400
> Subject: [PATCH] mm/filemap: Cache the value of vm_flags
>
> After we have unlocked the mmap_lock for I/O, the file is pinned, but
> the VMA is not.  Checking this flag after that can be a use-after-free.
> It's not a terribly interesting use-after-free as it can only read one
> bit, and it's used to decide whether to read 2MB or 4MB.  But it
> upsets the automated tools and it's generally bad practice anyway,
> so let's fix it.
>
> Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
> Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 9a1eef6c5d35..61dd39990fda 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2991,11 +2991,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	struct address_space *mapping = file->f_mapping;
>  	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
>  	struct file *fpin = NULL;
> +	unsigned long vm_flags = vmf->vma->vm_flags;
>  	unsigned int mmap_miss;
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	/* Use the readahead code, even if readahead is disabled */
> -	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
> +	if (vm_flags & VM_HUGEPAGE) {
>  		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
>  		ra->size = HPAGE_PMD_NR;
> @@ -3003,7 +3004,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  		 * Fetch two PMD folios, so we get the chance to actually
>  		 * readahead, unless we've been told not to.
>  		 */
> -		if (!(vmf->vma->vm_flags & VM_RAND_READ))
> +		if (!(vm_flags & VM_RAND_READ))
>  			ra->size *= 2;
>  		ra->async_size = HPAGE_PMD_NR;
>  		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> @@ -3012,12 +3013,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  #endif
>  
>  	/* If we don't want any read-ahead, don't bother */
> -	if (vmf->vma->vm_flags & VM_RAND_READ)
> +	if (vm_flags & VM_RAND_READ)
>  		return fpin;
>  	if (!ra->ra_pages)
>  		return fpin;
>  
> -	if (vmf->vma->vm_flags & VM_SEQ_READ) {
> +	if (vm_flags & VM_SEQ_READ) {
>  		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  		page_cache_sync_ra(&ractl, ra->ra_pages);
>  		return fpin;
> -- 
> 2.34.1
>

* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 18:33     ` Matthew Wilcox
  2022-05-25 18:33       ` syzbot
@ 2022-05-25 18:33       ` syzbot
  2022-05-25 18:59       ` Andrew Morton
  2022-05-27 12:35       ` Dmitry Vyukov
  3 siblings, 0 replies; 8+ messages in thread
From: syzbot @ 2022-05-25 18:33 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-kernel, linux-mm, syzkaller-bugs, willy

> On Wed, May 25, 2022 at 06:57:55PM +0100, Matthew Wilcox wrote:
>> 
>> Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
>> pinned, but the VMA isn't.  I'll whip up a patch.
>
> #syz test

want 2 args (repo, branch), got 7

>
> From 01a4917c4cfe400eb310eba4f2fa466d381623c1 Mon Sep 17 00:00:00 2001
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Date: Wed, 25 May 2022 14:23:45 -0400
> Subject: [PATCH] mm/filemap: Cache the value of vm_flags
>
> After we have unlocked the mmap_lock for I/O, the file is pinned, but
> the VMA is not.  Checking this flag after that can be a use-after-free.
> It's not a terribly interesting use-after-free as it can only read one
> bit, and it's used to decide whether to read 2MB or 4MB.  But it
> upsets the automated tools and it's generally bad practice anyway,
> so let's fix it.
>
> Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
> Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 9a1eef6c5d35..61dd39990fda 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2991,11 +2991,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	struct address_space *mapping = file->f_mapping;
>  	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
>  	struct file *fpin = NULL;
> +	unsigned long vm_flags = vmf->vma->vm_flags;
>  	unsigned int mmap_miss;
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	/* Use the readahead code, even if readahead is disabled */
> -	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
> +	if (vm_flags & VM_HUGEPAGE) {
>  		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
>  		ra->size = HPAGE_PMD_NR;
> @@ -3003,7 +3004,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  		 * Fetch two PMD folios, so we get the chance to actually
>  		 * readahead, unless we've been told not to.
>  		 */
> -		if (!(vmf->vma->vm_flags & VM_RAND_READ))
> +		if (!(vm_flags & VM_RAND_READ))
>  			ra->size *= 2;
>  		ra->async_size = HPAGE_PMD_NR;
>  		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> @@ -3012,12 +3013,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  #endif
>  
>  	/* If we don't want any read-ahead, don't bother */
> -	if (vmf->vma->vm_flags & VM_RAND_READ)
> +	if (vm_flags & VM_RAND_READ)
>  		return fpin;
>  	if (!ra->ra_pages)
>  		return fpin;
>  
> -	if (vmf->vma->vm_flags & VM_SEQ_READ) {
> +	if (vm_flags & VM_SEQ_READ) {
>  		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  		page_cache_sync_ra(&ractl, ra->ra_pages);
>  		return fpin;
> -- 
> 2.34.1
>
> -- 
> You received this message because you are subscribed to the Google Groups "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller-bugs+unsubscribe@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/syzkaller-bugs/Yo52gzYYOpU0NwDo%40casper.infradead.org.

* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 18:33     ` Matthew Wilcox
  2022-05-25 18:33       ` syzbot
  2022-05-25 18:33       ` syzbot
@ 2022-05-25 18:59       ` Andrew Morton
  2022-05-27 12:35       ` Dmitry Vyukov
  3 siblings, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2022-05-25 18:59 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: syzbot, linux-kernel, linux-mm, syzkaller-bugs

On Wed, 25 May 2022 19:33:39 +0100 Matthew Wilcox <willy@infradead.org> wrote:

> On Wed, May 25, 2022 at 06:57:55PM +0100, Matthew Wilcox wrote:
> > 
> > Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
> > pinned, but the VMA isn't.  I'll whip up a patch.
> 
> #syz test
> 
> >From 01a4917c4cfe400eb310eba4f2fa466d381623c1 Mon Sep 17 00:00:00 2001
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Date: Wed, 25 May 2022 14:23:45 -0400
> Subject: [PATCH] mm/filemap: Cache the value of vm_flags
> 
> After we have unlocked the mmap_lock for I/O, the file is pinned, but
> the VMA is not.  Checking this flag after that can be a use-after-free.
> It's not a terribly interesting use-after-free as it can only read one
> bit, and it's used to decide whether to read 2MB or 4MB.  But it
> upsets the automated tools and it's generally bad practice anyway,
> so let's fix it.
> 
> Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
> Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

cc:stable also, please.
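
(That is, something like the tag below added to the commit; the version
annotation is a guess based on the Fixes: commit having landed in v5.18.)

Cc: <stable@vger.kernel.org> # v5.18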



* Re: [syzbot] KASAN: use-after-free Read in do_sync_mmap_readahead
  2022-05-25 18:33     ` Matthew Wilcox
                         ` (2 preceding siblings ...)
  2022-05-25 18:59       ` Andrew Morton
@ 2022-05-27 12:35       ` Dmitry Vyukov
  3 siblings, 0 replies; 8+ messages in thread
From: Dmitry Vyukov @ 2022-05-27 12:35 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, syzbot, linux-kernel, linux-mm, syzkaller-bugs

On Wed, 25 May 2022 at 20:33, Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, May 25, 2022 at 06:57:55PM +0100, Matthew Wilcox wrote:
> >
> > Ohh, that makes sense.  We unlocked the mmap_sem, so the file is
> > pinned, but the VMA isn't.  I'll whip up a patch.
>
> #syz test

This bug doesn't have a reproducer, so unfortunately it can't be tested.
But the command would also need a kernel repo/branch to apply the patch
to, e.g. something like:

#syz test git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

> From 01a4917c4cfe400eb310eba4f2fa466d381623c1 Mon Sep 17 00:00:00 2001
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Date: Wed, 25 May 2022 14:23:45 -0400
> Subject: [PATCH] mm/filemap: Cache the value of vm_flags
>
> After we have unlocked the mmap_lock for I/O, the file is pinned, but
> the VMA is not.  Checking this flag after that can be a use-after-free.
> It's not a terribly interesting use-after-free as it can only read one
> bit, and it's used to decide whether to read 2MB or 4MB.  But it
> upsets the automated tools and it's generally bad practice anyway,
> so let's fix it.
>
> Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
> Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 9a1eef6c5d35..61dd39990fda 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2991,11 +2991,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>         struct address_space *mapping = file->f_mapping;
>         DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
>         struct file *fpin = NULL;
> +       unsigned long vm_flags = vmf->vma->vm_flags;
>         unsigned int mmap_miss;
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>         /* Use the readahead code, even if readahead is disabled */
> -       if (vmf->vma->vm_flags & VM_HUGEPAGE) {
> +       if (vm_flags & VM_HUGEPAGE) {
>                 fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>                 ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
>                 ra->size = HPAGE_PMD_NR;
> @@ -3003,7 +3004,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>                  * Fetch two PMD folios, so we get the chance to actually
>                  * readahead, unless we've been told not to.
>                  */
> -               if (!(vmf->vma->vm_flags & VM_RAND_READ))
> +               if (!(vm_flags & VM_RAND_READ))
>                         ra->size *= 2;
>                 ra->async_size = HPAGE_PMD_NR;
>                 page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> @@ -3012,12 +3013,12 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  #endif
>
>         /* If we don't want any read-ahead, don't bother */
> -       if (vmf->vma->vm_flags & VM_RAND_READ)
> +       if (vm_flags & VM_RAND_READ)
>                 return fpin;
>         if (!ra->ra_pages)
>                 return fpin;
>
> -       if (vmf->vma->vm_flags & VM_SEQ_READ) {
> +       if (vm_flags & VM_SEQ_READ) {
>                 fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>                 page_cache_sync_ra(&ractl, ra->ra_pages);
>                 return fpin;
> --
> 2.34.1
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller-bugs+unsubscribe@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/syzkaller-bugs/Yo52gzYYOpU0NwDo%40casper.infradead.org.
