linux-mm.kvack.org archive mirror
* possible deadlock in shmem_mfill_atomic_pte
@ 2020-03-31 17:21 syzbot
  2020-04-11  5:16 ` syzbot
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: syzbot @ 2020-03-31 17:21 UTC (permalink / raw)
  To: akpm, hughd, linux-kernel, linux-mm, syzkaller-bugs

Hello,

syzbot found the following crash on:

HEAD commit:    527630fb Merge tag 'clk-fixes-for-linus' of git://git.kern..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1214875be00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=27392dd2975fd692
dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
compiler:       gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com

========================================================
WARNING: possible irq lock inversion dependency detected
5.6.0-rc7-syzkaller #0 Not tainted
--------------------------------------------------------
syz-executor.0/10317 just changed the state of lock:
ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
but this lock was taken by another, SOFTIRQ-safe lock in the past:
 (&(&xa->xa_lock)->rlock#5){..-.}


and interrupts could create inverse lock ordering between them.


other info that might help us debug this:
 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&(&info->lock)->rlock);
                               local_irq_disable();
                               lock(&(&xa->xa_lock)->rlock#5);
                               lock(&(&info->lock)->rlock);
  <Interrupt>
    lock(&(&xa->xa_lock)->rlock#5);

 *** DEADLOCK ***
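
In plain C, this is the classic AB-BA inversion through an interrupt.
A minimal sketch of the pattern (illustrative only, not kernel source;
the hypothetical locks "a" and "b" stand in for info->lock and
xa->xa_lock):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(a);      /* plays info->lock: taken IRQ-unsafely  */
  static DEFINE_SPINLOCK(b);      /* plays xa->xa_lock: used from softirq  */

  void cpu0_task(void)            /* shmem_mfill_atomic_pte() */
  {
          spin_lock(&a);          /* IRQs left enabled */
          /* <interrupt>: a softirq (writeback completion) runs
           * here and calls cpu0_softirq() below */
          spin_unlock(&a);
  }

  void cpu0_softirq(void)         /* end_page_writeback() */
  {
          spin_lock(&b);          /* spins: CPU1 already holds b */
          spin_unlock(&b);
  }

  void cpu1_task(void)            /* huge page split under reclaim */
  {
          spin_lock_irq(&b);      /* IRQs off, b held */
          spin_lock(&a);          /* spins: CPU0 still holds a */
          spin_unlock(&a);
          spin_unlock_irq(&b);
  }

CPU0 holds "a" and spins on "b" inside the interrupt; CPU1 holds "b"
and spins on "a": neither side can make progress.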

2 locks held by syz-executor.0/10317:
 #0: ffff888011721898 (&mm->mmap_sem#2){++++}, at: __mcopy_atomic mm/userfaultfd.c:474 [inline]
 #0: ffff888011721898 (&mm->mmap_sem#2){++++}, at: mcopy_atomic+0x185/0x2510 mm/userfaultfd.c:607
 #1: ffff888024eda280 (&(ptlock_ptr(page))->rlock#2){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
 #1: ffff888024eda280 (&(ptlock_ptr(page))->rlock#2){+.+.}, at: shmem_mfill_atomic_pte+0xf76/0x21c0 mm/shmem.c:2394

the shortest dependencies between 2nd lock and 1st lock:
 -> (&(&xa->xa_lock)->rlock#5){..-.} {
    IN-SOFTIRQ-W at:
                      lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
                      __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                      _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
                      test_clear_page_writeback+0x1d7/0x11e0 mm/page-writeback.c:2728
                      end_page_writeback+0x239/0x520 mm/filemap.c:1317
                      end_buffer_async_write+0x6a9/0xa30 fs/buffer.c:389
                      end_bio_bh_io_sync+0xe2/0x140 fs/buffer.c:3018
                      bio_endio+0x473/0x820 block/bio.c:1872
                      req_bio_endio block/blk-core.c:245 [inline]
                      blk_update_request+0x3e1/0xdc0 block/blk-core.c:1468
                      scsi_end_request+0x80/0x7a0 drivers/scsi/scsi_lib.c:576
                      scsi_io_completion+0x1e7/0x1300 drivers/scsi/scsi_lib.c:960
                      scsi_softirq_done+0x31e/0x3b0 drivers/scsi/scsi_lib.c:1476
                      blk_done_softirq+0x2db/0x440 block/blk-softirq.c:37
                      __do_softirq+0x26c/0x99d kernel/softirq.c:292
                      invoke_softirq kernel/softirq.c:373 [inline]
                      irq_exit+0x192/0x1d0 kernel/softirq.c:413
                      exiting_irq arch/x86/include/asm/apic.h:546 [inline]
                      do_IRQ+0xde/0x280 arch/x86/kernel/irq.c:263
                      ret_from_intr+0x0/0x36
                      clear_page_erms+0x7/0x10 arch/x86/lib/clear_page_64.S:48
                      clear_page arch/x86/include/asm/page_64.h:49 [inline]
                      clear_highpage include/linux/highmem.h:214 [inline]
                      kernel_init_free_pages+0x92/0x120 mm/page_alloc.c:1118
                      prep_new_page+0x12e/0x1f0 mm/page_alloc.c:2160
                      get_page_from_freelist+0x14c7/0x3ee0 mm/page_alloc.c:3684
                      __alloc_pages_nodemask+0x2a5/0x820 mm/page_alloc.c:4731
                      alloc_pages_current+0xff/0x200 mm/mempolicy.c:2211
                      alloc_pages include/linux/gfp.h:532 [inline]
                      __page_cache_alloc+0x298/0x480 mm/filemap.c:959
                      __do_page_cache_readahead+0x1a7/0x570 mm/readahead.c:196
                      ra_submit mm/internal.h:62 [inline]
                      ondemand_readahead+0x566/0xd60 mm/readahead.c:492
                      page_cache_async_readahead mm/readahead.c:574 [inline]
                      page_cache_async_readahead+0x43d/0x7c0 mm/readahead.c:547
                      generic_file_buffered_read mm/filemap.c:2037 [inline]
                      generic_file_read_iter+0x124a/0x2b00 mm/filemap.c:2302
                      ext4_file_read_iter fs/ext4/file.c:131 [inline]
                      ext4_file_read_iter+0x1d1/0x600 fs/ext4/file.c:114
                      call_read_iter include/linux/fs.h:1896 [inline]
                      new_sync_read+0x4a2/0x790 fs/read_write.c:414
                      __vfs_read+0xc9/0x100 fs/read_write.c:427
                      integrity_kernel_read+0x143/0x200 security/integrity/iint.c:200
                      ima_calc_file_hash_tfm+0x2aa/0x3b0 security/integrity/ima/ima_crypto.c:360
                      ima_calc_file_shash security/integrity/ima/ima_crypto.c:391 [inline]
                      ima_calc_file_hash+0x199/0x540 security/integrity/ima/ima_crypto.c:456
                      ima_collect_measurement+0x4c4/0x570 security/integrity/ima/ima_api.c:249
                      process_measurement+0xc6d/0x1740 security/integrity/ima/ima_main.c:326
                      ima_bprm_check+0xde/0x210 security/integrity/ima/ima_main.c:417
                      security_bprm_check+0x89/0xb0 security/security.c:819
                      search_binary_handler+0x70/0x580 fs/exec.c:1649
                      exec_binprm fs/exec.c:1705 [inline]
                      __do_execve_file.isra.0+0x12fc/0x2270 fs/exec.c:1825
                      do_execveat_common fs/exec.c:1871 [inline]
                      do_execve fs/exec.c:1888 [inline]
                      __do_sys_execve fs/exec.c:1964 [inline]
                      __se_sys_execve fs/exec.c:1959 [inline]
                      __x64_sys_execve+0x8a/0xb0 fs/exec.c:1959
                      do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
                      entry_SYSCALL_64_after_hwframe+0x49/0xbe
    INITIAL USE at:
                     lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
                     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                     _raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
                     spin_lock_irq include/linux/spinlock.h:363 [inline]
                     clear_inode+0x1b/0x1e0 fs/inode.c:529
                     shmem_evict_inode+0x1db/0x9f0 mm/shmem.c:1116
                     evict+0x2ed/0x650 fs/inode.c:576
                     iput_final fs/inode.c:1572 [inline]
                     iput+0x536/0x8c0 fs/inode.c:1598
                     dentry_unlink_inode+0x2c0/0x3e0 fs/dcache.c:374
                     d_delete fs/dcache.c:2451 [inline]
                     d_delete+0x117/0x150 fs/dcache.c:2440
                     vfs_unlink+0x4d5/0x620 fs/namei.c:4087
                     handle_remove+0x417/0x720 drivers/base/devtmpfs.c:332
                     handle drivers/base/devtmpfs.c:378 [inline]
                     devtmpfsd.part.0+0x302/0x750 drivers/base/devtmpfs.c:413
                     devtmpfsd+0x107/0x120 drivers/base/devtmpfs.c:403
                     kthread+0x357/0x430 kernel/kthread.c:255
                     ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
  }
  ... key      at: [<ffffffff8c402aa0>] __key.17910+0x0/0x40
  ... acquired at:
   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
   _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
   shmem_uncharge+0x24/0x270 mm/shmem.c:341
   __split_huge_page mm/huge_memory.c:2540 [inline]
   split_huge_page_to_list+0x2751/0x33c0 mm/huge_memory.c:2813
   split_huge_page include/linux/huge_mm.h:169 [inline]
   shmem_unused_huge_shrink+0x7ba/0x13a0 mm/shmem.c:542
   shmem_unused_huge_scan+0x7a/0xb0 mm/shmem.c:574
   super_cache_scan+0x34f/0x480 fs/super.c:111
   do_shrink_slab+0x3fc/0xab0 mm/vmscan.c:512
   shrink_slab mm/vmscan.c:673 [inline]
   shrink_slab+0x16f/0x5f0 mm/vmscan.c:646
   shrink_node_memcgs mm/vmscan.c:2676 [inline]
   shrink_node+0x477/0x1b20 mm/vmscan.c:2780
   shrink_zones mm/vmscan.c:2983 [inline]
   do_try_to_free_pages+0x38d/0x13a0 mm/vmscan.c:3036
   try_to_free_pages+0x293/0x8d0 mm/vmscan.c:3275
   __perform_reclaim mm/page_alloc.c:4113 [inline]
   __alloc_pages_direct_reclaim mm/page_alloc.c:4134 [inline]
   __alloc_pages_slowpath+0x919/0x26a0 mm/page_alloc.c:4537
   __alloc_pages_nodemask+0x5e1/0x820 mm/page_alloc.c:4751
   __alloc_pages include/linux/gfp.h:496 [inline]
   __alloc_pages_node include/linux/gfp.h:509 [inline]
   alloc_pages_vma+0x3bd/0x600 mm/mempolicy.c:2155
   shmem_alloc_hugepage+0x122/0x210 mm/shmem.c:1484
   shmem_alloc_and_acct_page+0x3ba/0x980 mm/shmem.c:1522
   shmem_getpage_gfp+0xdb9/0x2860 mm/shmem.c:1835
   shmem_getpage mm/shmem.c:154 [inline]
   shmem_write_begin+0x102/0x1e0 mm/shmem.c:2488
   generic_perform_write+0x20a/0x4e0 mm/filemap.c:3287
   __generic_file_write_iter+0x24c/0x610 mm/filemap.c:3416
   generic_file_write_iter+0x3f0/0x62d mm/filemap.c:3448
   call_write_iter include/linux/fs.h:1902 [inline]
   new_sync_write+0x49c/0x700 fs/read_write.c:483
   __vfs_write+0xc9/0x100 fs/read_write.c:496
   vfs_write+0x262/0x5c0 fs/read_write.c:558
   ksys_write+0x127/0x250 fs/read_write.c:611
   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> (&(&info->lock)->rlock){+.+.} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                    spin_lock include/linux/spinlock.h:338 [inline]
                    shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
                    shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2445
                    mfill_atomic_pte mm/userfaultfd.c:434 [inline]
                    __mcopy_atomic mm/userfaultfd.c:557 [inline]
                    mcopy_atomic+0xac7/0x2510 mm/userfaultfd.c:607
                    userfaultfd_copy fs/userfaultfd.c:1736 [inline]
                    userfaultfd_ioctl+0x4d2/0x3b10 fs/userfaultfd.c:1886
                    vfs_ioctl fs/ioctl.c:47 [inline]
                    ksys_ioctl+0x11a/0x180 fs/ioctl.c:763
                    __do_sys_ioctl fs/ioctl.c:772 [inline]
                    __se_sys_ioctl fs/ioctl.c:770 [inline]
                    __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:770
                    do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
   SOFTIRQ-ON-W at:
                    lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                    spin_lock include/linux/spinlock.h:338 [inline]
                    shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
                    shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2445
                    mfill_atomic_pte mm/userfaultfd.c:434 [inline]
                    __mcopy_atomic mm/userfaultfd.c:557 [inline]
                    mcopy_atomic+0xac7/0x2510 mm/userfaultfd.c:607
                    userfaultfd_copy fs/userfaultfd.c:1736 [inline]
                    userfaultfd_ioctl+0x4d2/0x3b10 fs/userfaultfd.c:1886
                    vfs_ioctl fs/ioctl.c:47 [inline]
                    ksys_ioctl+0x11a/0x180 fs/ioctl.c:763
                    __do_sys_ioctl fs/ioctl.c:772 [inline]
                    __se_sys_ioctl fs/ioctl.c:770 [inline]
                    __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:770
                    do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
   INITIAL USE at:
                   lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
                   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                   _raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
                   spin_lock_irq include/linux/spinlock.h:363 [inline]
                   shmem_getpage_gfp+0xf10/0x2860 mm/shmem.c:1887
                   shmem_read_mapping_page_gfp+0xd3/0x170 mm/shmem.c:4218
                   shmem_read_mapping_page include/linux/shmem_fs.h:101 [inline]
                   drm_gem_get_pages+0x293/0x530 drivers/gpu/drm/drm_gem.c:578
                   drm_gem_shmem_get_pages_locked drivers/gpu/drm/drm_gem_shmem_helper.c:146 [inline]
                   drm_gem_shmem_get_pages+0x9d/0x160 drivers/gpu/drm/drm_gem_shmem_helper.c:175
                   virtio_gpu_object_attach+0x121/0x950 drivers/gpu/drm/virtio/virtgpu_vq.c:1090
                   virtio_gpu_object_create+0x26f/0x490 drivers/gpu/drm/virtio/virtgpu_object.c:150
                   virtio_gpu_gem_create+0xaa/0x1d0 drivers/gpu/drm/virtio/virtgpu_gem.c:42
                   virtio_gpu_mode_dumb_create+0x21e/0x360 drivers/gpu/drm/virtio/virtgpu_gem.c:82
                   drm_mode_create_dumb+0x27c/0x300 drivers/gpu/drm/drm_dumb_buffers.c:94
                   drm_client_buffer_create drivers/gpu/drm/drm_client.c:267 [inline]
                   drm_client_framebuffer_create+0x1b7/0x770 drivers/gpu/drm/drm_client.c:412
                   drm_fb_helper_generic_probe+0x1e4/0x810 drivers/gpu/drm/drm_fb_helper.c:2051
                   drm_fb_helper_single_fb_probe drivers/gpu/drm/drm_fb_helper.c:1600 [inline]
                   __drm_fb_helper_initial_config_and_unlock+0xb56/0x11e0 drivers/gpu/drm/drm_fb_helper.c:1758
                   drm_fb_helper_initial_config drivers/gpu/drm/drm_fb_helper.c:1853 [inline]
                   drm_fb_helper_initial_config drivers/gpu/drm/drm_fb_helper.c:1845 [inline]
                   drm_fbdev_client_hotplug+0x30f/0x580 drivers/gpu/drm/drm_fb_helper.c:2145
                   drm_fbdev_generic_setup drivers/gpu/drm/drm_fb_helper.c:2224 [inline]
                   drm_fbdev_generic_setup+0x18b/0x295 drivers/gpu/drm/drm_fb_helper.c:2197
                   virtio_gpu_probe+0x28f/0x2de drivers/gpu/drm/virtio/virtgpu_drv.c:126
                   virtio_dev_probe+0x463/0x710 drivers/virtio/virtio.c:248
                   really_probe+0x281/0x6d0 drivers/base/dd.c:551
                   driver_probe_device+0x104/0x210 drivers/base/dd.c:724
                   device_driver_attach+0x108/0x140 drivers/base/dd.c:998
                   __driver_attach+0xda/0x240 drivers/base/dd.c:1075
                   bus_for_each_dev+0x14b/0x1d0 drivers/base/bus.c:305
                   bus_add_driver+0x4a2/0x5a0 drivers/base/bus.c:622
                   driver_register+0x1c4/0x330 drivers/base/driver.c:171
                   do_one_initcall+0x10a/0x7d0 init/main.c:1152
                   do_initcall_level init/main.c:1225 [inline]
                   do_initcalls init/main.c:1241 [inline]
                   do_basic_setup init/main.c:1261 [inline]
                   kernel_init_freeable+0x501/0x5ae init/main.c:1445
                   kernel_init+0xd/0x1bb init/main.c:1352
                   ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
 }
 ... key      at: [<ffffffff8c3f0420>] __key.55978+0x0/0x40
 ... acquired at:
   mark_lock_irq kernel/locking/lockdep.c:3316 [inline]
   mark_lock+0x50e/0x1220 kernel/locking/lockdep.c:3665
   mark_usage kernel/locking/lockdep.c:3583 [inline]
   __lock_acquire+0x1236/0x3ca0 kernel/locking/lockdep.c:3908
   lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:338 [inline]
   shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
   shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2445
   mfill_atomic_pte mm/userfaultfd.c:434 [inline]
   __mcopy_atomic mm/userfaultfd.c:557 [inline]
   mcopy_atomic+0xac7/0x2510 mm/userfaultfd.c:607
   userfaultfd_copy fs/userfaultfd.c:1736 [inline]
   userfaultfd_ioctl+0x4d2/0x3b10 fs/userfaultfd.c:1886
   vfs_ioctl fs/ioctl.c:47 [inline]
   ksys_ioctl+0x11a/0x180 fs/ioctl.c:763
   __do_sys_ioctl fs/ioctl.c:772 [inline]
   __se_sys_ioctl fs/ioctl.c:770 [inline]
   __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:770
   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
   entry_SYSCALL_64_after_hwframe+0x49/0xbe


stack backtrace:
CPU: 0 PID: 10317 Comm: syz-executor.0 Not tainted 5.6.0-rc7-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x188/0x20d lib/dump_stack.c:118
 print_irq_inversion_bug kernel/locking/lockdep.c:3179 [inline]
 check_usage_backwards.cold+0x1d/0x26 kernel/locking/lockdep.c:3230
 mark_lock_irq kernel/locking/lockdep.c:3316 [inline]
 mark_lock+0x50e/0x1220 kernel/locking/lockdep.c:3665
 mark_usage kernel/locking/lockdep.c:3583 [inline]
 __lock_acquire+0x1236/0x3ca0 kernel/locking/lockdep.c:3908
 lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:338 [inline]
 shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
 shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2445
 mfill_atomic_pte mm/userfaultfd.c:434 [inline]
 __mcopy_atomic mm/userfaultfd.c:557 [inline]
 mcopy_atomic+0xac7/0x2510 mm/userfaultfd.c:607
 userfaultfd_copy fs/userfaultfd.c:1736 [inline]
 userfaultfd_ioctl+0x4d2/0x3b10 fs/userfaultfd.c:1886
 vfs_ioctl fs/ioctl.c:47 [inline]
 ksys_ioctl+0x11a/0x180 fs/ioctl.c:763
 __do_sys_ioctl fs/ioctl.c:772 [inline]
 __se_sys_ioctl fs/ioctl.c:770 [inline]
 __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:770
 do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x45c6e9
Code: bd b1 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 8b b1 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fcff1e50c88 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000000000072bf00 RCX: 000000000045c6e9
RDX: 00000000200a0fe0 RSI: 00000000c028aa03 RDI: 0000000000000003
RBP: 00007fcff1e516d4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000000005b3 R14: 00000000004af2cd R15: 00000000006ec420


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.



* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-03-31 17:21 possible deadlock in shmem_mfill_atomic_pte syzbot
@ 2020-04-11  5:16 ` syzbot
  2020-04-16  3:56   ` Yang Shi
  2020-04-11  8:52 ` syzbot
  2020-04-13 23:19 ` Yang Shi
  2 siblings, 1 reply; 9+ messages in thread
From: syzbot @ 2020-04-11  5:16 UTC (permalink / raw)
  To: akpm, hughd, linux-kernel, linux-mm, syzkaller-bugs

syzbot has found a reproducer for the following crash on:

HEAD commit:    ab6f762f printk: queue wake_up_klogd irq_work only if per-..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=158a6b5de00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3010ccb0f380f660
dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
compiler:       clang version 10.0.0 (https://github.com/llvm/llvm-project/ c2443155a0fb245c8f17f2c1c72b6ea391e86e81)
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=12d3c5afe00000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=15e7f51be00000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com

========================================================
WARNING: possible irq lock inversion dependency detected
5.6.0-syzkaller #0 Not tainted
--------------------------------------------------------
syz-executor941/7000 just changed the state of lock:
ffff88808d9b18d8 (&info->lock){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
ffff88808d9b18d8 (&info->lock){+.+.}-{2:2}, at: shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
but this lock was taken by another, SOFTIRQ-safe lock in the past:
 (&xa->xa_lock#4){..-.}-{2:2}


and interrupts could create inverse lock ordering between them.


other info that might help us debug this:
 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&info->lock);
                               local_irq_disable();
                               lock(&xa->xa_lock#4);
                               lock(&info->lock);
  <Interrupt>
    lock(&xa->xa_lock#4);

 *** DEADLOCK ***

2 locks held by syz-executor941/7000:
 #0: ffff88809edf10e8 (&mm->mmap_sem#2){++++}-{3:3}, at: __mcopy_atomic mm/userfaultfd.c:491 [inline]
 #0: ffff88809edf10e8 (&mm->mmap_sem#2){++++}-{3:3}, at: mcopy_atomic+0x17a/0x1ba0 mm/userfaultfd.c:632
 #1: ffff888098e211f8 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
 #1: ffff888098e211f8 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: shmem_mfill_atomic_pte+0xf73/0x1e10 mm/shmem.c:2389

the shortest dependencies between 2nd lock and 1st lock:
 -> (&xa->xa_lock#4){..-.}-{2:2} {
    IN-SOFTIRQ-W at:
                      lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
                      __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                      _raw_spin_lock_irqsave+0x9e/0xc0 kernel/locking/spinlock.c:159
                      test_clear_page_writeback+0x2d8/0xac0 mm/page-writeback.c:2728
                      end_page_writeback+0x212/0x390 mm/filemap.c:1317
                      end_bio_bh_io_sync+0xb1/0x110 fs/buffer.c:3012
                      req_bio_endio block/blk-core.c:245 [inline]
                      blk_update_request+0x437/0x1070 block/blk-core.c:1472
                      scsi_end_request+0x7a/0x7f0 drivers/scsi/scsi_lib.c:575
                      scsi_io_completion+0x178/0x1be0 drivers/scsi/scsi_lib.c:959
                      blk_done_softirq+0x2f2/0x360 block/blk-softirq.c:37
                      __do_softirq+0x268/0x80c kernel/softirq.c:292
                      invoke_softirq kernel/softirq.c:373 [inline]
                      irq_exit+0x223/0x230 kernel/softirq.c:413
                      exiting_irq arch/x86/include/asm/apic.h:546 [inline]
                      do_IRQ+0xfb/0x1d0 arch/x86/kernel/irq.c:263
                      ret_from_intr+0x0/0x2b
                      orc_find arch/x86/kernel/unwind_orc.c:164 [inline]
                      unwind_next_frame+0x20b/0x1cf0 arch/x86/kernel/unwind_orc.c:407
                      arch_stack_walk+0xb4/0xe0 arch/x86/kernel/stacktrace.c:25
                      stack_trace_save+0xad/0x150 kernel/stacktrace.c:123
                      save_stack mm/kasan/common.c:49 [inline]
                      set_track mm/kasan/common.c:57 [inline]
                      __kasan_kmalloc+0x114/0x160 mm/kasan/common.c:495
                      __do_kmalloc mm/slab.c:3656 [inline]
                      __kmalloc+0x24b/0x330 mm/slab.c:3665
                      kmalloc include/linux/slab.h:560 [inline]
                      tomoyo_realpath_from_path+0xd8/0x630 security/tomoyo/realpath.c:252
                      tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
                      tomoyo_check_open_permission+0x1b6/0x900 security/tomoyo/file.c:771
                      security_file_open+0x50/0xc0 security/security.c:1548
                      do_dentry_open+0x35d/0x10b0 fs/open.c:784
                      do_open fs/namei.c:3229 [inline]
                      path_openat+0x2790/0x38b0 fs/namei.c:3346
                      do_filp_open+0x191/0x3a0 fs/namei.c:3373
                      do_sys_openat2+0x463/0x770 fs/open.c:1148
                      do_sys_open fs/open.c:1164 [inline]
                      ksys_open include/linux/syscalls.h:1386 [inline]
                      __do_sys_open fs/open.c:1170 [inline]
                      __se_sys_open fs/open.c:1168 [inline]
                      __x64_sys_open+0x1af/0x1e0 fs/open.c:1168
                      do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
                      entry_SYSCALL_64_after_hwframe+0x49/0xb3
    INITIAL USE at:
                     lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
                     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                     _raw_spin_lock_irq+0x67/0x80 kernel/locking/spinlock.c:167
                     spin_lock_irq include/linux/spinlock.h:378 [inline]
                     __add_to_page_cache_locked+0x53d/0xc70 mm/filemap.c:855
                     add_to_page_cache_lru+0x17f/0x4d0 mm/filemap.c:921
                     do_read_cache_page+0x209/0xd00 mm/filemap.c:2755
                     read_mapping_page include/linux/pagemap.h:397 [inline]
                     read_part_sector+0xd8/0x2d0 block/partitions/core.c:643
                     adfspart_check_ICS+0x45/0x640 block/partitions/acorn.c:360
                     check_partition block/partitions/core.c:140 [inline]
                     blk_add_partitions+0x3ce/0x1240 block/partitions/core.c:571
                     bdev_disk_changed+0x446/0x5d0 fs/block_dev.c:1544
                     __blkdev_get+0xb2b/0x13d0 fs/block_dev.c:1647
                     register_disk block/genhd.c:763 [inline]
                     __device_add_disk+0x95f/0x1040 block/genhd.c:853
                     add_disk include/linux/genhd.h:294 [inline]
                     brd_init+0x349/0x42a drivers/block/brd.c:533
                     do_one_initcall+0x14b/0x350 init/main.c:1157
                     do_initcall_level+0x101/0x14c init/main.c:1230
                     do_initcalls+0x59/0x9b init/main.c:1246
                     kernel_init_freeable+0x2fa/0x418 init/main.c:1450
                     kernel_init+0xd/0x290 init/main.c:1357
                     ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
  }
  ... key      at: [<ffffffff8b5afa68>] xa_init_flags.__key+0x0/0x10
  ... acquired at:
   lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
   _raw_spin_lock_irqsave+0x9e/0xc0 kernel/locking/spinlock.c:159
   shmem_uncharge+0x34/0x4c0 mm/shmem.c:341
   __split_huge_page+0xda8/0x1900 mm/huge_memory.c:2613
   split_huge_page_to_list+0x10a4/0x15f0 mm/huge_memory.c:2886
   split_huge_page include/linux/huge_mm.h:204 [inline]
   shmem_punch_compound+0x17d/0x1c0 mm/shmem.c:814
   shmem_undo_range+0x5da/0x1d00 mm/shmem.c:870
   shmem_truncate_range mm/shmem.c:980 [inline]
   shmem_setattr+0x4e3/0x8a0 mm/shmem.c:1039
   notify_change+0xad5/0xfb0 fs/attr.c:336
   do_truncate fs/open.c:64 [inline]
   do_sys_ftruncate+0x55f/0x690 fs/open.c:195
   do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
   entry_SYSCALL_64_after_hwframe+0x49/0xb3

-> (&info->lock){+.+.}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                    spin_lock include/linux/spinlock.h:353 [inline]
                    shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
                    shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
                    mfill_atomic_pte mm/userfaultfd.c:449 [inline]
                    __mcopy_atomic mm/userfaultfd.c:582 [inline]
                    mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
                    userfaultfd_copy fs/userfaultfd.c:1743 [inline]
                    userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
                    vfs_ioctl fs/ioctl.c:47 [inline]
                    ksys_ioctl fs/ioctl.c:763 [inline]
                    __do_sys_ioctl fs/ioctl.c:772 [inline]
                    __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
                    do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
                    entry_SYSCALL_64_after_hwframe+0x49/0xb3
   SOFTIRQ-ON-W at:
                    lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                    spin_lock include/linux/spinlock.h:353 [inline]
                    shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
                    shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
                    mfill_atomic_pte mm/userfaultfd.c:449 [inline]
                    __mcopy_atomic mm/userfaultfd.c:582 [inline]
                    mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
                    userfaultfd_copy fs/userfaultfd.c:1743 [inline]
                    userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
                    vfs_ioctl fs/ioctl.c:47 [inline]
                    ksys_ioctl fs/ioctl.c:763 [inline]
                    __do_sys_ioctl fs/ioctl.c:772 [inline]
                    __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
                    do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
                    entry_SYSCALL_64_after_hwframe+0x49/0xb3
   INITIAL USE at:
                   lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
                   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                   _raw_spin_lock_irq+0x67/0x80 kernel/locking/spinlock.c:167
                   spin_lock_irq include/linux/spinlock.h:378 [inline]
                   shmem_getpage_gfp+0x2160/0x3120 mm/shmem.c:1882
                   shmem_getpage mm/shmem.c:154 [inline]
                   shmem_write_begin+0xcd/0x1a0 mm/shmem.c:2483
                   generic_perform_write+0x23b/0x4e0 mm/filemap.c:3302
                   __generic_file_write_iter+0x22b/0x4e0 mm/filemap.c:3431
                   generic_file_write_iter+0x4a6/0x650 mm/filemap.c:3463
                   call_write_iter include/linux/fs.h:1907 [inline]
                   new_sync_write fs/read_write.c:484 [inline]
                   __vfs_write+0x54c/0x710 fs/read_write.c:497
                   vfs_write+0x274/0x580 fs/read_write.c:559
                   ksys_write+0x11b/0x220 fs/read_write.c:612
                   do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
                   entry_SYSCALL_64_after_hwframe+0x49/0xb3
 }
 ... key      at: [<ffffffff8b59f840>] shmem_get_inode.__key+0x0/0x10
 ... acquired at:
   mark_lock_irq kernel/locking/lockdep.c:3585 [inline]
   mark_lock+0x529/0x1b00 kernel/locking/lockdep.c:3935
   mark_usage kernel/locking/lockdep.c:3852 [inline]
   __lock_acquire+0xb95/0x2b90 kernel/locking/lockdep.c:4298
   lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:353 [inline]
   shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
   shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
   mfill_atomic_pte mm/userfaultfd.c:449 [inline]
   __mcopy_atomic mm/userfaultfd.c:582 [inline]
   mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
   userfaultfd_copy fs/userfaultfd.c:1743 [inline]
   userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
   vfs_ioctl fs/ioctl.c:47 [inline]
   ksys_ioctl fs/ioctl.c:763 [inline]
   __do_sys_ioctl fs/ioctl.c:772 [inline]
   __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
   do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
   entry_SYSCALL_64_after_hwframe+0x49/0xb3


stack backtrace:
CPU: 1 PID: 7000 Comm: syz-executor941 Not tainted 5.6.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1e9/0x30e lib/dump_stack.c:118
 print_irq_inversion_bug+0xb67/0xe90 kernel/locking/lockdep.c:3447
 check_usage_backwards+0x13f/0x240 kernel/locking/lockdep.c:3499
 mark_lock_irq kernel/locking/lockdep.c:3585 [inline]
 mark_lock+0x529/0x1b00 kernel/locking/lockdep.c:3935
 mark_usage kernel/locking/lockdep.c:3852 [inline]
 __lock_acquire+0xb95/0x2b90 kernel/locking/lockdep.c:4298
 lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:353 [inline]
 shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
 shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
 mfill_atomic_pte mm/userfaultfd.c:449 [inline]
 __mcopy_atomic mm/userfaultfd.c:582 [inline]
 mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
 userfaultfd_copy fs/userfaultfd.c:1743 [inline]
 userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
 vfs_ioctl fs/ioctl.c:47 [inline]
 ksys_ioctl fs/ioctl.c:763 [inline]
 __do_sys_ioctl fs/ioctl.c:772 [inline]
 __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
 do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
 entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x444399
Code: 0d d8 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db d7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffd0974a4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00000000004002e0 RCX: 0000000000444399
RDX: 00000000200a0fe0 RSI: 00000000c028aa03 RDI: 0000000000000004
RBP: 00000000006cf018 R08: 00000000004002e0 R09: 00000000004002e0
R10: 00000000004002e0 R11: 0000000000000246 R12: 0000000000402000
R13: 0000000000402090 R14: 0000000000000000 R15: 0000000000000000




* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-03-31 17:21 possible deadlock in shmem_mfill_atomic_pte syzbot
  2020-04-11  5:16 ` syzbot
@ 2020-04-11  8:52 ` syzbot
  2020-04-13 23:19 ` Yang Shi
  2 siblings, 0 replies; 9+ messages in thread
From: syzbot @ 2020-04-11  8:52 UTC (permalink / raw)
  To: akpm, hughd, linux-kernel, linux-mm, syzkaller-bugs, torvalds

syzbot has bisected this bug to:

commit 71725ed10c40696dc6bdccf8e225815dcef24dba
Author: Hugh Dickins <hughd@google.com>
Date:   Tue Apr 7 03:07:57 2020 +0000

    mm: huge tmpfs: try to split_huge_page() when punching hole

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=17c463e7e00000
start commit:   ab6f762f printk: queue wake_up_klogd irq_work only if per-..
git tree:       upstream
final crash:    https://syzkaller.appspot.com/x/report.txt?x=142463e7e00000
console output: https://syzkaller.appspot.com/x/log.txt?x=102463e7e00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3010ccb0f380f660
dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=12d3c5afe00000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=15e7f51be00000

Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection



* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-03-31 17:21 possible deadlock in shmem_mfill_atomic_pte syzbot
  2020-04-11  5:16 ` syzbot
  2020-04-11  8:52 ` syzbot
@ 2020-04-13 23:19 ` Yang Shi
  2020-04-16  1:27   ` Hugh Dickins
  2 siblings, 1 reply; 9+ messages in thread
From: Yang Shi @ 2020-04-13 23:19 UTC (permalink / raw)
  To: syzbot
  Cc: Andrew Morton, Hugh Dickins, Linux Kernel Mailing List, Linux MM,
	syzkaller-bugs

On Tue, Mar 31, 2020 at 10:21 AM syzbot
<syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:    527630fb Merge tag 'clk-fixes-for-linus' of git://git.kern..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=1214875be00000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=27392dd2975fd692
> dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
> compiler:       gcc (GCC) 9.0.0 20181231 (experimental)
>
> Unfortunately, I don't have any reproducer for this crash yet.
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
>
> WARNING: possible irq lock inversion dependency detected
> 5.6.0-rc7-syzkaller #0 Not tainted
> --------------------------------------------------------
> syz-executor.0/10317 just changed the state of lock:
> ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
> ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
> but this lock was taken by another, SOFTIRQ-safe lock in the past:
>  (&(&xa->xa_lock)->rlock#5){..-.}
>
>
> and interrupts could create inverse lock ordering between them.
>
>
> other info that might help us debug this:
>  Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&(&info->lock)->rlock);
>                                local_irq_disable();
>                                lock(&(&xa->xa_lock)->rlock#5);
>                                lock(&(&info->lock)->rlock);
>   <Interrupt>
>     lock(&(&xa->xa_lock)->rlock#5);
>
>  *** DEADLOCK ***

This looks possible: shmem_mfill_atomic_pte() acquires info->lock with
IRQs enabled, while the same lock is elsewhere taken under the
SOFTIRQ-safe xa_lock (see shmem_uncharge() in the trace above).

The patch below should fix it:

diff --git a/mm/shmem.c b/mm/shmem.c
index d722eb8..762da6a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2399,11 +2399,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,

        lru_cache_add_anon(page);

-       spin_lock(&info->lock);
+       spin_lock_irq(&info->lock);
        info->alloced++;
        inode->i_blocks += BLOCKS_PER_PAGE;
        shmem_recalc_inode(inode);
-       spin_unlock(&info->lock);
+       spin_unlock_irq(&info->lock);

        inc_mm_counter(dst_mm, mm_counter_file(page));
        page_add_file_rmap(page, false);
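
spin_lock_irq() rather than spin_lock_irqsave() should be enough here,
since shmem_mfill_atomic_pte() is only reached from the UFFDIO_COPY
ioctl path, where IRQs are known to be enabled. Callers that may run
with IRQs already disabled need the irqsave variant instead, as
shmem_uncharge() does in the trace above -- roughly (an illustrative
sketch, not the exact mm/shmem.c source):

        unsigned long flags;

        spin_lock_irqsave(&info->lock, flags);
        info->alloced -= pages;
        inode->i_blocks -= pages * BLOCKS_PER_PAGE;
        shmem_recalc_inode(inode);
        spin_unlock_irqrestore(&info->lock, flags);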


> [remainder of the original report snipped]


* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-04-13 23:19 ` Yang Shi
@ 2020-04-16  1:27   ` Hugh Dickins
  2020-04-16  2:22     ` Yang Shi
  0 siblings, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2020-04-16  1:27 UTC (permalink / raw)
  To: Yang Shi
  Cc: syzbot, Andrew Morton, Hugh Dickins, Linux Kernel Mailing List,
	Linux MM, syzkaller-bugs

On Mon, 13 Apr 2020, Yang Shi wrote:
> On Tue, Mar 31, 2020 at 10:21 AM syzbot
> <syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com> wrote:
> >
> > [syzbot report header and links snipped]
> >
> > WARNING: possible irq lock inversion dependency detected
> > 5.6.0-rc7-syzkaller #0 Not tainted
> > --------------------------------------------------------
> > syz-executor.0/10317 just changed the state of lock:
> > ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
> > ffff888021d16568 (&(&info->lock)->rlock){+.+.}, at: shmem_mfill_atomic_pte+0x1012/0x21c0 mm/shmem.c:2407
> > but this lock was taken by another, SOFTIRQ-safe lock in the past:
> >  (&(&xa->xa_lock)->rlock#5){..-.}
> >
> >
> > and interrupts could create inverse lock ordering between them.
> >
> >
> > other info that might help us debug this:
> >  Possible interrupt unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(&(&info->lock)->rlock);
> >                                local_irq_disable();
> >                                lock(&(&xa->xa_lock)->rlock#5);
> >                                lock(&(&info->lock)->rlock);
> >   <Interrupt>
> >     lock(&(&xa->xa_lock)->rlock#5);
> >
> >  *** DEADLOCK ***
> 
> This looks possible: shmem_mfill_atomic_pte() acquires info->lock with
> irqs enabled.
> 
> The below patch should be able to fix it:

I agree, thank you: please send to akpm with your signoff and

Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Acked-by: Hugh Dickins <hughd@google.com>

I bet that 4.11 commit was being worked on before 4.8 reversed the
ordering of info->lock and tree_lock, changing the spin_lock(&info->lock)
calls to spin_lock_irq*(&info->lock); this one is the only hold-out. And
since I don't use userfaultfd, I would never have seen the lockdep
report myself.
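
To spell out the inversion for anyone following along: info->lock is
taken while the SOFTIRQ-safe xa_lock is held, and xa_lock is taken from
softirq context in the writeback completion path, so every acquisition
of info->lock must disable interrupts; otherwise an interrupt arriving
inside the unprotected critical section can re-enter the chain and
deadlock the CPU. Below is a minimal, hypothetical module sketch of the
simplest form of the pattern (demo_lock, demo_timer and demo_count are
invented names, nothing from this thread):

#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

static DEFINE_SPINLOCK(demo_lock);
static struct timer_list demo_timer;
static unsigned long demo_count;

/* Timer callbacks run in softirq context, like the writeback
 * completion path in the report above. */
static void demo_timer_fn(struct timer_list *t)
{
	spin_lock(&demo_lock);
	demo_count++;
	spin_unlock(&demo_lock);
}

static int __init demo_init(void)
{
	timer_setup(&demo_timer, demo_timer_fn, 0);
	mod_timer(&demo_timer, jiffies + HZ);

	/* Deadlock-prone, the shape of mm/shmem.c:2407 before the fix:
	 * process context takes the lock with interrupts enabled, so
	 * the timer softirq can fire inside this critical section and
	 * spin on demo_lock forever on the same CPU. */
	spin_lock(&demo_lock);
	demo_count++;
	spin_unlock(&demo_lock);

	/* Safe, the shape the patch below switches to: interrupts are
	 * disabled for the process-context critical section. */
	spin_lock_irq(&demo_lock);
	demo_count++;
	spin_unlock_irq(&demo_lock);

	return 0;
}

static void __exit demo_exit(void)
{
	del_timer_sync(&demo_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");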

> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d722eb8..762da6a 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2399,11 +2399,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> 
>         lru_cache_add_anon(page);
> 
> -       spin_lock(&info->lock);
> +       spin_lock_irq(&info->lock);
>         info->alloced++;
>         inode->i_blocks += BLOCKS_PER_PAGE;
>         shmem_recalc_inode(inode);
> -       spin_unlock(&info->lock);
> +       spin_unlock_irq(&info->lock);
> 
>         inc_mm_counter(dst_mm, mm_counter_file(page));
>         page_add_file_rmap(page, false);
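
For context, the path being fixed is reached from userspace via the
UFFDIO_COPY ioctl on a userfaultfd registered over a shmem mapping
(userfaultfd_copy -> mcopy_atomic -> shmem_mfill_atomic_pte in the
trace). Here is a hypothetical minimal sketch of that flow, not the
syzkaller reproducer, with all error handling omitted:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
	int fd = memfd_create("shmem-demo", 0);	/* shmem-backed file */

	ftruncate(fd, len);
	char *dst = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);

	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Resolve the still-missing page by copying data in; this
	 * ioctl is the userfaultfd_copy() entry point in the trace. */
	char src[4096];
	memset(src, 0xaa, sizeof(src));
	struct uffdio_copy copy = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = len,
	};
	ioctl(uffd, UFFDIO_COPY, &copy);

	printf("first byte after copy: 0x%02x\n", (unsigned char)dst[0]);
	return 0;
}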


* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-04-16  1:27   ` Hugh Dickins
@ 2020-04-16  2:22     ` Yang Shi
  2020-04-16  3:10       ` Hugh Dickins
  0 siblings, 1 reply; 9+ messages in thread
From: Yang Shi @ 2020-04-16  2:22 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: syzbot, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	syzkaller-bugs

On Wed, Apr 15, 2020 at 6:27 PM Hugh Dickins <hughd@google.com> wrote:
>
> On Mon, 13 Apr 2020, Yang Shi wrote:
> > On Tue, Mar 31, 2020 at 10:21 AM syzbot
> > <syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com> wrote:
> > >
> > > [full syzbot report snipped; quoted in the previous message]
> >
> > This looks possible: shmem_mfill_atomic_pte() acquires info->lock with
> > irqs enabled.
> >
> > The below patch should be able to fix it:
>
> I agree, thank you: please send to akpm with your signoff and
>
> Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
> Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
> Acked-by: Hugh Dickins <hughd@google.com>
>
> I bet that 4.11 commit was being worked on before 4.8 reversed the
> ordering of info->lock and tree_lock, changing the spin_lock(&info->lock)
> calls to spin_lock_irq*(&info->lock); this one is the only hold-out. And
> since I don't use userfaultfd, I would never have seen the lockdep
> report myself.

Thanks, Hugh. I believe this commit should fix the splat. I'm pushing
my test tree to GitHub so syzkaller can test it, and I will send the
formal patch once it passes. The push is just slow, less than 50 KB/s...


>
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index d722eb8..762da6a 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2399,11 +2399,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> >
> >         lru_cache_add_anon(page);
> >
> > -       spin_lock(&info->lock);
> > +       spin_lock_irq(&info->lock);
> >         info->alloced++;
> >         inode->i_blocks += BLOCKS_PER_PAGE;
> >         shmem_recalc_inode(inode);
> > -       spin_unlock(&info->lock);
> > +       spin_unlock_irq(&info->lock);
> >
> >         inc_mm_counter(dst_mm, mm_counter_file(page));
> >         page_add_file_rmap(page, false);


* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-04-16  2:22     ` Yang Shi
@ 2020-04-16  3:10       ` Hugh Dickins
  0 siblings, 0 replies; 9+ messages in thread
From: Hugh Dickins @ 2020-04-16  3:10 UTC (permalink / raw)
  To: Yang Shi
  Cc: Hugh Dickins, syzbot, Andrew Morton, Linux Kernel Mailing List,
	Linux MM, syzkaller-bugs

On Wed, 15 Apr 2020, Yang Shi wrote:
> 
> Thanks, Hugh. I believe this commit should fix the splat. I'm pushing
> my test tree to GitHub so syzkaller can test it, and I will send the
> formal patch once it passes. The push is just slow, less than 50 KB/s...

Your diligence is admirable.  With straightforward ones like this,
I tend to just rely on syzbot to call me out later if I've bluffed.

Hugh


* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-04-11  5:16 ` syzbot
@ 2020-04-16  3:56   ` Yang Shi
  2020-04-16  6:58     ` syzbot
  0 siblings, 1 reply; 9+ messages in thread
From: Yang Shi @ 2020-04-16  3:56 UTC (permalink / raw)
  To: syzbot
  Cc: Andrew Morton, Hugh Dickins, Linux Kernel Mailing List, Linux MM,
	syzkaller-bugs

#syz test: https://github.com/yang-shi/linux.git
8f9c86c99d278d375ae24b7ea426e1662c5e4009

On Fri, Apr 10, 2020 at 10:16 PM syzbot
<syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com> wrote:
>
> syzbot has found a reproducer for the following crash on:
>
> HEAD commit:    ab6f762f printk: queue wake_up_klogd irq_work only if per-..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=158a6b5de00000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=3010ccb0f380f660
> dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
> compiler:       clang version 10.0.0 (https://github.com/llvm/llvm-project/ c2443155a0fb245c8f17f2c1c72b6ea391e86e81)
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=12d3c5afe00000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=15e7f51be00000
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
>
> ========================================================
> WARNING: possible irq lock inversion dependency detected
> 5.6.0-syzkaller #0 Not tainted
> --------------------------------------------------------
> syz-executor941/7000 just changed the state of lock:
> ffff88808d9b18d8 (&info->lock){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
> ffff88808d9b18d8 (&info->lock){+.+.}-{2:2}, at: shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
> but this lock was taken by another, SOFTIRQ-safe lock in the past:
>  (&xa->xa_lock#4){..-.}-{2:2}
>
>
> and interrupts could create inverse lock ordering between them.
>
>
> other info that might help us debug this:
>  Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&info->lock);
>                                local_irq_disable();
>                                lock(&xa->xa_lock#4);
>                                lock(&info->lock);
>   <Interrupt>
>     lock(&xa->xa_lock#4);
>
>  *** DEADLOCK ***
>
> 2 locks held by syz-executor941/7000:
>  #0: ffff88809edf10e8 (&mm->mmap_sem#2){++++}-{3:3}, at: __mcopy_atomic mm/userfaultfd.c:491 [inline]
>  #0: ffff88809edf10e8 (&mm->mmap_sem#2){++++}-{3:3}, at: mcopy_atomic+0x17a/0x1ba0 mm/userfaultfd.c:632
>  #1: ffff888098e211f8 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
>  #1: ffff888098e211f8 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: shmem_mfill_atomic_pte+0xf73/0x1e10 mm/shmem.c:2389
>
> the shortest dependencies between 2nd lock and 1st lock:
>  -> (&xa->xa_lock#4){..-.}-{2:2} {
>     IN-SOFTIRQ-W at:
>                       lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>                       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
>                       _raw_spin_lock_irqsave+0x9e/0xc0 kernel/locking/spinlock.c:159
>                       test_clear_page_writeback+0x2d8/0xac0 mm/page-writeback.c:2728
>                       end_page_writeback+0x212/0x390 mm/filemap.c:1317
>                       end_bio_bh_io_sync+0xb1/0x110 fs/buffer.c:3012
>                       req_bio_endio block/blk-core.c:245 [inline]
>                       blk_update_request+0x437/0x1070 block/blk-core.c:1472
>                       scsi_end_request+0x7a/0x7f0 drivers/scsi/scsi_lib.c:575
>                       scsi_io_completion+0x178/0x1be0 drivers/scsi/scsi_lib.c:959
>                       blk_done_softirq+0x2f2/0x360 block/blk-softirq.c:37
>                       __do_softirq+0x268/0x80c kernel/softirq.c:292
>                       invoke_softirq kernel/softirq.c:373 [inline]
>                       irq_exit+0x223/0x230 kernel/softirq.c:413
>                       exiting_irq arch/x86/include/asm/apic.h:546 [inline]
>                       do_IRQ+0xfb/0x1d0 arch/x86/kernel/irq.c:263
>                       ret_from_intr+0x0/0x2b
>                       orc_find arch/x86/kernel/unwind_orc.c:164 [inline]
>                       unwind_next_frame+0x20b/0x1cf0 arch/x86/kernel/unwind_orc.c:407
>                       arch_stack_walk+0xb4/0xe0 arch/x86/kernel/stacktrace.c:25
>                       stack_trace_save+0xad/0x150 kernel/stacktrace.c:123
>                       save_stack mm/kasan/common.c:49 [inline]
>                       set_track mm/kasan/common.c:57 [inline]
>                       __kasan_kmalloc+0x114/0x160 mm/kasan/common.c:495
>                       __do_kmalloc mm/slab.c:3656 [inline]
>                       __kmalloc+0x24b/0x330 mm/slab.c:3665
>                       kmalloc include/linux/slab.h:560 [inline]
>                       tomoyo_realpath_from_path+0xd8/0x630 security/tomoyo/realpath.c:252
>                       tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
>                       tomoyo_check_open_permission+0x1b6/0x900 security/tomoyo/file.c:771
>                       security_file_open+0x50/0xc0 security/security.c:1548
>                       do_dentry_open+0x35d/0x10b0 fs/open.c:784
>                       do_open fs/namei.c:3229 [inline]
>                       path_openat+0x2790/0x38b0 fs/namei.c:3346
>                       do_filp_open+0x191/0x3a0 fs/namei.c:3373
>                       do_sys_openat2+0x463/0x770 fs/open.c:1148
>                       do_sys_open fs/open.c:1164 [inline]
>                       ksys_open include/linux/syscalls.h:1386 [inline]
>                       __do_sys_open fs/open.c:1170 [inline]
>                       __se_sys_open fs/open.c:1168 [inline]
>                       __x64_sys_open+0x1af/0x1e0 fs/open.c:1168
>                       do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>                       entry_SYSCALL_64_after_hwframe+0x49/0xb3
>     INITIAL USE at:
>                      lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>                      __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
>                      _raw_spin_lock_irq+0x67/0x80 kernel/locking/spinlock.c:167
>                      spin_lock_irq include/linux/spinlock.h:378 [inline]
>                      __add_to_page_cache_locked+0x53d/0xc70 mm/filemap.c:855
>                      add_to_page_cache_lru+0x17f/0x4d0 mm/filemap.c:921
>                      do_read_cache_page+0x209/0xd00 mm/filemap.c:2755
>                      read_mapping_page include/linux/pagemap.h:397 [inline]
>                      read_part_sector+0xd8/0x2d0 block/partitions/core.c:643
>                      adfspart_check_ICS+0x45/0x640 block/partitions/acorn.c:360
>                      check_partition block/partitions/core.c:140 [inline]
>                      blk_add_partitions+0x3ce/0x1240 block/partitions/core.c:571
>                      bdev_disk_changed+0x446/0x5d0 fs/block_dev.c:1544
>                      __blkdev_get+0xb2b/0x13d0 fs/block_dev.c:1647
>                      register_disk block/genhd.c:763 [inline]
>                      __device_add_disk+0x95f/0x1040 block/genhd.c:853
>                      add_disk include/linux/genhd.h:294 [inline]
>                      brd_init+0x349/0x42a drivers/block/brd.c:533
>                      do_one_initcall+0x14b/0x350 init/main.c:1157
>                      do_initcall_level+0x101/0x14c init/main.c:1230
>                      do_initcalls+0x59/0x9b init/main.c:1246
>                      kernel_init_freeable+0x2fa/0x418 init/main.c:1450
>                      kernel_init+0xd/0x290 init/main.c:1357
>                      ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
>   }
>   ... key      at: [<ffffffff8b5afa68>] xa_init_flags.__key+0x0/0x10
>   ... acquired at:
>    lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
>    _raw_spin_lock_irqsave+0x9e/0xc0 kernel/locking/spinlock.c:159
>    shmem_uncharge+0x34/0x4c0 mm/shmem.c:341
>    __split_huge_page+0xda8/0x1900 mm/huge_memory.c:2613
>    split_huge_page_to_list+0x10a4/0x15f0 mm/huge_memory.c:2886
>    split_huge_page include/linux/huge_mm.h:204 [inline]
>    shmem_punch_compound+0x17d/0x1c0 mm/shmem.c:814
>    shmem_undo_range+0x5da/0x1d00 mm/shmem.c:870
>    shmem_truncate_range mm/shmem.c:980 [inline]
>    shmem_setattr+0x4e3/0x8a0 mm/shmem.c:1039
>    notify_change+0xad5/0xfb0 fs/attr.c:336
>    do_truncate fs/open.c:64 [inline]
>    do_sys_ftruncate+0x55f/0x690 fs/open.c:195
>    do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>    entry_SYSCALL_64_after_hwframe+0x49/0xb3
>
> -> (&info->lock){+.+.}-{2:2} {
>    HARDIRQ-ON-W at:
>                     lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>                     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
>                     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
>                     spin_lock include/linux/spinlock.h:353 [inline]
>                     shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
>                     shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
>                     mfill_atomic_pte mm/userfaultfd.c:449 [inline]
>                     __mcopy_atomic mm/userfaultfd.c:582 [inline]
>                     mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
>                     userfaultfd_copy fs/userfaultfd.c:1743 [inline]
>                     userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
>                     vfs_ioctl fs/ioctl.c:47 [inline]
>                     ksys_ioctl fs/ioctl.c:763 [inline]
>                     __do_sys_ioctl fs/ioctl.c:772 [inline]
>                     __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
>                     do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>                     entry_SYSCALL_64_after_hwframe+0x49/0xb3
>    SOFTIRQ-ON-W at:
>                     lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>                     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
>                     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
>                     spin_lock include/linux/spinlock.h:353 [inline]
>                     shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
>                     shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
>                     mfill_atomic_pte mm/userfaultfd.c:449 [inline]
>                     __mcopy_atomic mm/userfaultfd.c:582 [inline]
>                     mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
>                     userfaultfd_copy fs/userfaultfd.c:1743 [inline]
>                     userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
>                     vfs_ioctl fs/ioctl.c:47 [inline]
>                     ksys_ioctl fs/ioctl.c:763 [inline]
>                     __do_sys_ioctl fs/ioctl.c:772 [inline]
>                     __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
>                     do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>                     entry_SYSCALL_64_after_hwframe+0x49/0xb3
>    INITIAL USE at:
>                    lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>                    __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
>                    _raw_spin_lock_irq+0x67/0x80 kernel/locking/spinlock.c:167
>                    spin_lock_irq include/linux/spinlock.h:378 [inline]
>                    shmem_getpage_gfp+0x2160/0x3120 mm/shmem.c:1882
>                    shmem_getpage mm/shmem.c:154 [inline]
>                    shmem_write_begin+0xcd/0x1a0 mm/shmem.c:2483
>                    generic_perform_write+0x23b/0x4e0 mm/filemap.c:3302
>                    __generic_file_write_iter+0x22b/0x4e0 mm/filemap.c:3431
>                    generic_file_write_iter+0x4a6/0x650 mm/filemap.c:3463
>                    call_write_iter include/linux/fs.h:1907 [inline]
>                    new_sync_write fs/read_write.c:484 [inline]
>                    __vfs_write+0x54c/0x710 fs/read_write.c:497
>                    vfs_write+0x274/0x580 fs/read_write.c:559
>                    ksys_write+0x11b/0x220 fs/read_write.c:612
>                    do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>                    entry_SYSCALL_64_after_hwframe+0x49/0xb3
>  }
>  ... key      at: [<ffffffff8b59f840>] shmem_get_inode.__key+0x0/0x10
>  ... acquired at:
>    mark_lock_irq kernel/locking/lockdep.c:3585 [inline]
>    mark_lock+0x529/0x1b00 kernel/locking/lockdep.c:3935
>    mark_usage kernel/locking/lockdep.c:3852 [inline]
>    __lock_acquire+0xb95/0x2b90 kernel/locking/lockdep.c:4298
>    lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
>    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
>    spin_lock include/linux/spinlock.h:353 [inline]
>    shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
>    shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
>    mfill_atomic_pte mm/userfaultfd.c:449 [inline]
>    __mcopy_atomic mm/userfaultfd.c:582 [inline]
>    mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
>    userfaultfd_copy fs/userfaultfd.c:1743 [inline]
>    userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
>    vfs_ioctl fs/ioctl.c:47 [inline]
>    ksys_ioctl fs/ioctl.c:763 [inline]
>    __do_sys_ioctl fs/ioctl.c:772 [inline]
>    __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
>    do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>    entry_SYSCALL_64_after_hwframe+0x49/0xb3
>
>
> stack backtrace:
> CPU: 1 PID: 7000 Comm: syz-executor941 Not tainted 5.6.0-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:77 [inline]
>  dump_stack+0x1e9/0x30e lib/dump_stack.c:118
>  print_irq_inversion_bug+0xb67/0xe90 kernel/locking/lockdep.c:3447
>  check_usage_backwards+0x13f/0x240 kernel/locking/lockdep.c:3499
>  mark_lock_irq kernel/locking/lockdep.c:3585 [inline]
>  mark_lock+0x529/0x1b00 kernel/locking/lockdep.c:3935
>  mark_usage kernel/locking/lockdep.c:3852 [inline]
>  __lock_acquire+0xb95/0x2b90 kernel/locking/lockdep.c:4298
>  lock_acquire+0x169/0x480 kernel/locking/lockdep.c:4923
>  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
>  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
>  spin_lock include/linux/spinlock.h:353 [inline]
>  shmem_mfill_atomic_pte+0x13f4/0x1e10 mm/shmem.c:2402
>  shmem_mcopy_atomic_pte+0x3a/0x50 mm/shmem.c:2440
>  mfill_atomic_pte mm/userfaultfd.c:449 [inline]
>  __mcopy_atomic mm/userfaultfd.c:582 [inline]
>  mcopy_atomic+0x84f/0x1ba0 mm/userfaultfd.c:632
>  userfaultfd_copy fs/userfaultfd.c:1743 [inline]
>  userfaultfd_ioctl+0x2289/0x4890 fs/userfaultfd.c:1941
>  vfs_ioctl fs/ioctl.c:47 [inline]
>  ksys_ioctl fs/ioctl.c:763 [inline]
>  __do_sys_ioctl fs/ioctl.c:772 [inline]
>  __se_sys_ioctl+0xf9/0x160 fs/ioctl.c:770
>  do_syscall_64+0xf3/0x1b0 arch/x86/entry/common.c:295
>  entry_SYSCALL_64_after_hwframe+0x49/0xb3
> RIP: 0033:0x444399
> Code: 0d d8 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db d7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
> RSP: 002b:00007ffd0974a4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> RAX: ffffffffffffffda RBX: 00000000004002e0 RCX: 0000000000444399
> RDX: 00000000200a0fe0 RSI: 00000000c028aa03 RDI: 0000000000000004
> RBP: 00000000006cf018 R08: 00000000004002e0 R09: 00000000004002e0
> R10: 00000000004002e0 R11: 0000000000000246 R12: 0000000000402000
> R13: 0000000000402090 R14: 0000000000000000 R15: 0000000000000000
>
>


* Re: possible deadlock in shmem_mfill_atomic_pte
  2020-04-16  3:56   ` Yang Shi
@ 2020-04-16  6:58     ` syzbot
  0 siblings, 0 replies; 9+ messages in thread
From: syzbot @ 2020-04-16  6:58 UTC (permalink / raw)
  To: akpm, hughd, linux-kernel, linux-mm, shy828301, syzkaller-bugs

Hello,

syzbot has tested the proposed patch and the reproducer did not trigger crash:

Reported-and-tested-by: syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com

Tested on:

commit:         8f9c86c9 mm: shmem: disable interrupt when acquiring info-..
git tree:       https://github.com/yang-shi/linux.git
kernel config:  https://syzkaller.appspot.com/x/.config?x=11f10cc27c63cade
dashboard link: https://syzkaller.appspot.com/bug?extid=e27980339d305f2dbfd9
compiler:       clang version 10.0.0 (https://github.com/llvm/llvm-project/ c2443155a0fb245c8f17f2c1c72b6ea391e86e81)

Note: testing is done by a robot and is best-effort only.


Thread overview: 9+ messages
2020-03-31 17:21 possible deadlock in shmem_mfill_atomic_pte syzbot
2020-04-11  5:16 ` syzbot
2020-04-16  3:56   ` Yang Shi
2020-04-16  6:58     ` syzbot
2020-04-11  8:52 ` syzbot
2020-04-13 23:19 ` Yang Shi
2020-04-16  1:27   ` Hugh Dickins
2020-04-16  2:22     ` Yang Shi
2020-04-16  3:10       ` Hugh Dickins
