linux-fsdevel.vger.kernel.org archive mirror
* possible deadlock in aio_poll
@ 2018-09-10  7:41 syzbot
  2018-09-10 16:53 ` Christoph Hellwig
  2018-10-27  6:16 ` syzbot
  0 siblings, 2 replies; 7+ messages in thread
From: syzbot @ 2018-09-10  7:41 UTC (permalink / raw)
  To: bcrl, linux-aio, linux-fsdevel, linux-kernel, syzkaller-bugs, viro

Hello,

syzbot found the following crash on:

HEAD commit:    f8f65382c98a Merge tag 'for-linus' of git://git.kernel.org..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1587e266400000
kernel config:  https://syzkaller.appspot.com/x/.config?x=8f59875069d721b6
dashboard link: https://syzkaller.appspot.com/bug?extid=5b1df0420c523b45a953
compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1753bdca400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+5b1df0420c523b45a953@syzkaller.appspotmail.com

8021q: adding VLAN 0 to HW filter on device team0

=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
4.19.0-rc2+ #229 Not tainted
-----------------------------------------------------
syz-executor2/9399 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
00000000126506e0 (&ctx->fd_wqh){+.+.}, at: spin_lock include/linux/spinlock.h:329 [inline]
00000000126506e0 (&ctx->fd_wqh){+.+.}, at: aio_poll+0x760/0x1420 fs/aio.c:1747

and this task is already holding:
000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:354 [inline]
000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll+0x738/0x1420 fs/aio.c:1746
which would create a new lock dependency:
  (&(&ctx->ctx_lock)->rlock){..-.} -> (&ctx->fd_wqh){+.+.}

but this new dependency connects a SOFTIRQ-irq-safe lock:
  (&(&ctx->ctx_lock)->rlock){..-.}

... which became SOFTIRQ-irq-safe at:
   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
   _raw_spin_lock_irq+0x61/0x80 kernel/locking/spinlock.c:160
   spin_lock_irq include/linux/spinlock.h:354 [inline]
   free_ioctx_users+0xbc/0x710 fs/aio.c:603
   percpu_ref_put_many include/linux/percpu-refcount.h:284 [inline]
   percpu_ref_put include/linux/percpu-refcount.h:300 [inline]
   percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
   percpu_ref_switch_to_atomic_rcu+0x62c/0x820 lib/percpu-refcount.c:158
   __rcu_reclaim kernel/rcu/rcu.h:236 [inline]
   rcu_do_batch kernel/rcu/tree.c:2576 [inline]
   invoke_rcu_callbacks kernel/rcu/tree.c:2880 [inline]
   __rcu_process_callbacks kernel/rcu/tree.c:2847 [inline]
   rcu_process_callbacks+0xf23/0x2670 kernel/rcu/tree.c:2864
   __do_softirq+0x30b/0xad8 kernel/softirq.c:292
   invoke_softirq kernel/softirq.c:372 [inline]
   irq_exit+0x17f/0x1c0 kernel/softirq.c:412
   exiting_irq arch/x86/include/asm/apic.h:536 [inline]
   smp_apic_timer_interrupt+0x1cb/0x760 arch/x86/kernel/apic/apic.c:1056
   apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:864
   native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:57
   arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
   default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
   arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
   default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
   cpuidle_idle_call kernel/sched/idle.c:153 [inline]
   do_idle+0x3db/0x5b0 kernel/sched/idle.c:262
   cpu_startup_entry+0x10c/0x120 kernel/sched/idle.c:368
   start_secondary+0x523/0x750 arch/x86/kernel/smpboot.c:271
   secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:242

to a SOFTIRQ-irq-unsafe lock:
  (&ctx->fd_wqh){+.+.}

... which became SOFTIRQ-irq-unsafe at:
...
   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
   spin_lock include/linux/spinlock.h:329 [inline]
   userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
   userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
   do_loop_readv_writev fs/read_write.c:700 [inline]
   do_iter_read+0x4a3/0x650 fs/read_write.c:924
   vfs_readv+0x175/0x1c0 fs/read_write.c:986
   do_readv+0x11a/0x310 fs/read_write.c:1019
   __do_sys_readv fs/read_write.c:1106 [inline]
   __se_sys_readv fs/read_write.c:1103 [inline]
   __x64_sys_readv+0x75/0xb0 fs/read_write.c:1103
   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

  Possible interrupt unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&ctx->fd_wqh);
                                local_irq_disable();
                                lock(&(&ctx->ctx_lock)->rlock);
                                lock(&ctx->fd_wqh);
   <Interrupt>
     lock(&(&ctx->ctx_lock)->rlock);

  *** DEADLOCK ***

1 lock held by syz-executor2/9399:
  #0: 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:354 [inline]
  #0: 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll+0x738/0x1420 fs/aio.c:1746

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&(&ctx->ctx_lock)->rlock){..-.} ops: 387 {
    IN-SOFTIRQ-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
                     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                     _raw_spin_lock_irq+0x61/0x80 kernel/locking/spinlock.c:160
                     spin_lock_irq include/linux/spinlock.h:354 [inline]
                     free_ioctx_users+0xbc/0x710 fs/aio.c:603
                     percpu_ref_put_many include/linux/percpu-refcount.h:284 [inline]
                     percpu_ref_put include/linux/percpu-refcount.h:300 [inline]
                     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
                     percpu_ref_switch_to_atomic_rcu+0x62c/0x820 lib/percpu-refcount.c:158
                     __rcu_reclaim kernel/rcu/rcu.h:236 [inline]
                     rcu_do_batch kernel/rcu/tree.c:2576 [inline]
                     invoke_rcu_callbacks kernel/rcu/tree.c:2880 [inline]
                     __rcu_process_callbacks kernel/rcu/tree.c:2847 [inline]
                     rcu_process_callbacks+0xf23/0x2670 kernel/rcu/tree.c:2864
                     __do_softirq+0x30b/0xad8 kernel/softirq.c:292
                     invoke_softirq kernel/softirq.c:372 [inline]
                     irq_exit+0x17f/0x1c0 kernel/softirq.c:412
                     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
                     smp_apic_timer_interrupt+0x1cb/0x760 arch/x86/kernel/apic/apic.c:1056
                     apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:864
                     native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:57
                     arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
                     default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
                     arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
                     default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
                     cpuidle_idle_call kernel/sched/idle.c:153 [inline]
                     do_idle+0x3db/0x5b0 kernel/sched/idle.c:262
                     cpu_startup_entry+0x10c/0x120 kernel/sched/idle.c:368
                     start_secondary+0x523/0x750 arch/x86/kernel/smpboot.c:271
                     secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:242
    INITIAL USE at:
                    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
                    __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                    _raw_spin_lock_irq+0x61/0x80 kernel/locking/spinlock.c:160
                    spin_lock_irq include/linux/spinlock.h:354 [inline]
                    free_ioctx_users+0xbc/0x710 fs/aio.c:603
                    percpu_ref_put_many include/linux/percpu-refcount.h:284 [inline]
                    percpu_ref_put include/linux/percpu-refcount.h:300 [inline]
                    percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
                    percpu_ref_switch_to_atomic_rcu+0x62c/0x820 lib/percpu-refcount.c:158
                    __rcu_reclaim kernel/rcu/rcu.h:236 [inline]
                    rcu_do_batch kernel/rcu/tree.c:2576 [inline]
                    invoke_rcu_callbacks kernel/rcu/tree.c:2880 [inline]
                    __rcu_process_callbacks kernel/rcu/tree.c:2847 [inline]
                    rcu_process_callbacks+0xf23/0x2670 kernel/rcu/tree.c:2864
                    __do_softirq+0x30b/0xad8 kernel/softirq.c:292
                    invoke_softirq kernel/softirq.c:372 [inline]
                    irq_exit+0x17f/0x1c0 kernel/softirq.c:412
                    exiting_irq arch/x86/include/asm/apic.h:536 [inline]
                    smp_apic_timer_interrupt+0x1cb/0x760 arch/x86/kernel/apic/apic.c:1056
                    apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:864
                    native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:57
                    arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
                    default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
                    arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
                    default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
                    cpuidle_idle_call kernel/sched/idle.c:153 [inline]
                    do_idle+0x3db/0x5b0 kernel/sched/idle.c:262
                    cpu_startup_entry+0x10c/0x120 kernel/sched/idle.c:368
                    start_secondary+0x523/0x750 arch/x86/kernel/smpboot.c:271
                    secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:242
  }
  ... key      at: [<ffffffff8b3dc960>] __key.50120+0x0/0x40
  ... acquired at:
    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
    spin_lock include/linux/spinlock.h:329 [inline]
    aio_poll+0x760/0x1420 fs/aio.c:1747
    io_submit_one+0xab8/0x1090 fs/aio.c:1850
    __do_sys_io_submit fs/aio.c:1916 [inline]
    __se_sys_io_submit fs/aio.c:1887 [inline]
    __x64_sys_io_submit+0x1b9/0x5d0 fs/aio.c:1887
    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
    entry_SYSCALL_64_after_hwframe+0x49/0xbe


the dependencies between the lock to be acquired
  and SOFTIRQ-irq-unsafe lock:
-> (&ctx->fd_wqh){+.+.} ops: 2209 {
    HARDIRQ-ON-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
                     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                     _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                     spin_lock include/linux/spinlock.h:329 [inline]
                     userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                     userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                     do_loop_readv_writev fs/read_write.c:700 [inline]
                     do_iter_read+0x4a3/0x650 fs/read_write.c:924
                     vfs_readv+0x175/0x1c0 fs/read_write.c:986
                     do_readv+0x11a/0x310 fs/read_write.c:1019
                     __do_sys_readv fs/read_write.c:1106 [inline]
                     __se_sys_readv fs/read_write.c:1103 [inline]
                     __x64_sys_readv+0x75/0xb0 fs/read_write.c:1103
                     do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    SOFTIRQ-ON-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
                     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                     _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                     spin_lock include/linux/spinlock.h:329 [inline]
                     userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                     userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                     do_loop_readv_writev fs/read_write.c:700 [inline]
                     do_iter_read+0x4a3/0x650 fs/read_write.c:924
                     vfs_readv+0x175/0x1c0 fs/read_write.c:986
                     do_readv+0x11a/0x310 fs/read_write.c:1019
                     __do_sys_readv fs/read_write.c:1106 [inline]
                     __se_sys_readv fs/read_write.c:1103 [inline]
                     __x64_sys_readv+0x75/0xb0 fs/read_write.c:1103
                     do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    INITIAL USE at:
                    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                    spin_lock include/linux/spinlock.h:329 [inline]
                    userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                    userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                    do_loop_readv_writev fs/read_write.c:700 [inline]
                    do_iter_read+0x4a3/0x650 fs/read_write.c:924
                    vfs_readv+0x175/0x1c0 fs/read_write.c:986
                    do_readv+0x11a/0x310 fs/read_write.c:1019
                    __do_sys_readv fs/read_write.c:1106 [inline]
                    __se_sys_readv fs/read_write.c:1103 [inline]
                    __x64_sys_readv+0x75/0xb0 fs/read_write.c:1103
                    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
  }
  ... key      at: [<ffffffff8b3dc6e0>] __key.43670+0x0/0x40
  ... acquired at:
    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
    spin_lock include/linux/spinlock.h:329 [inline]
    aio_poll+0x760/0x1420 fs/aio.c:1747
    io_submit_one+0xab8/0x1090 fs/aio.c:1850
    __do_sys_io_submit fs/aio.c:1916 [inline]
    __se_sys_io_submit fs/aio.c:1887 [inline]
    __x64_sys_io_submit+0x1b9/0x5d0 fs/aio.c:1887
    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
    entry_SYSCALL_64_after_hwframe+0x49/0xbe


stack backtrace:
CPU: 0 PID: 9399 Comm: syz-executor2 Not tainted 4.19.0-rc2+ #229
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:77 [inline]
  dump_stack+0x1c4/0x2b4 lib/dump_stack.c:113
  print_bad_irq_dependency kernel/locking/lockdep.c:1569 [inline]
  check_usage.cold.58+0x6d5/0xad1 kernel/locking/lockdep.c:1601
  check_irq_usage kernel/locking/lockdep.c:1657 [inline]
  check_prev_add_irq kernel/locking/lockdep_states.h:8 [inline]
  check_prev_add kernel/locking/lockdep.c:1867 [inline]
  check_prevs_add kernel/locking/lockdep.c:1975 [inline]
  validate_chain kernel/locking/lockdep.c:2416 [inline]
  __lock_acquire+0x2400/0x4ec0 kernel/locking/lockdep.c:3412
  lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3901
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
  spin_lock include/linux/spinlock.h:329 [inline]
  aio_poll+0x760/0x1420 fs/aio.c:1747
  io_submit_one+0xab8/0x1090 fs/aio.c:1850
  __do_sys_io_submit fs/aio.c:1916 [inline]
  __se_sys_io_submit fs/aio.c:1887 [inline]
  __x64_sys_io_submit+0x1b9/0x5d0 fs/aio.c:1887
  do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x457099
Code: fd b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7  
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff  
ff 0f 83 cb b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fa4bd11bc78 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
RAX: ffffffffffffffda RBX: 00007fa4bd11c6d4 RCX: 0000000000457099
RDX: 0000000020000b00 RSI: 0000000000000001 RDI: 00007fa4bd13e000
RBP: 00000000009301e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000004cd990 R14: 00000000004c40a7 R15: 0000000000000002
kobject: 'loop5' (0000000044e01f3d): kobject_uevent_env
kobject: 'loop5' (0000000044e01f3d): fill_kobj_path: path = '/devices/virtual/block/loop5'
kobject: 'loop6' (00000000790743e9): kobject_uevent_env
kobject: 'loop6' (00000000790743e9): fill_kobj_path: path = '/devices/virtual/block/loop6'
kobject: 'loop0' (000000005501af28): kobject_uevent_env
kobject: 'loop0' (000000005501af28): fill_kobj_path: path = '/devices/virtual/block/loop0'
kobject: 'loop3' (000000001d0a0601): kobject_uevent_env
kobject: 'loop3' (000000001d0a0601): fill_kobj_path: path = '/devices/virtual/block/loop3'
kobject: 'loop1' (00000000fd2f09a6): kobject_uevent_env
kobject: 'loop1' (00000000fd2f09a6): fill_kobj_path: path = '/devices/virtual/block/loop1'
kobject: 'loop4' (000000003f6c580a): kobject_uevent_env
kobject: 'loop4' (000000003f6c580a): fill_kobj_path: path = '/devices/virtual/block/loop4'
kobject: 'loop7' (00000000b52cfd8a): kobject_uevent_env
kobject: 'loop7' (00000000b52cfd8a): fill_kobj_path: path = '/devices/virtual/block/loop7'
kobject: 'loop2' (000000002d051810): kobject_uevent_env
kobject: 'loop2' (000000002d051810): fill_kobj_path: path = '/devices/virtual/block/loop2'


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#bug-status-tracking for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches


* Re: possible deadlock in aio_poll
  2018-09-10  7:41 possible deadlock in aio_poll syzbot
@ 2018-09-10 16:53 ` Christoph Hellwig
  2018-09-10 18:14   ` Miklos Szeredi
  2018-10-17 23:55   ` Andrea Arcangeli
  2018-10-27  6:16 ` syzbot
  1 sibling, 2 replies; 7+ messages in thread
From: Christoph Hellwig @ 2018-09-10 16:53 UTC (permalink / raw)
  To: syzbot
  Cc: bcrl, linux-aio, linux-fsdevel, linux-kernel, syzkaller-bugs,
	viro, Andrea Arcangeli, akpm

On Mon, Sep 10, 2018 at 12:41:05AM -0700, syzbot wrote:
> =====================================================
> WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> 4.19.0-rc2+ #229 Not tainted
> -----------------------------------------------------
> syz-executor2/9399 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: spin_lock include/linux/spinlock.h:329 [inline]
> 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: aio_poll+0x760/0x1420 fs/aio.c:1747
> 
> and this task is already holding:
> 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:354 [inline]
> 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll+0x738/0x1420 fs/aio.c:1746
> which would create a new lock dependency:
>  (&(&ctx->ctx_lock)->rlock){..-.} -> (&ctx->fd_wqh){+.+.}

ctx->fd_wqh seems to only exist in userfaultfd, which indeed does some
strange open-coded waitqueue locking, and seems to fail to disable
irqs.  Something like this should fix it:

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index bfa0ec69f924..356d2b8568c1 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1026,7 +1026,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 	struct userfaultfd_ctx *fork_nctx = NULL;
 
 	/* always take the fd_wqh lock before the fault_pending_wqh lock */
-	spin_lock(&ctx->fd_wqh.lock);
+	spin_lock_irq(&ctx->fd_wqh.lock);
 	__add_wait_queue(&ctx->fd_wqh, &wait);
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -1112,13 +1112,13 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 			ret = -EAGAIN;
 			break;
 		}
-		spin_unlock(&ctx->fd_wqh.lock);
+		spin_unlock_irq(&ctx->fd_wqh.lock);
 		schedule();
-		spin_lock(&ctx->fd_wqh.lock);
+		spin_lock_irq(&ctx->fd_wqh.lock);
 	}
 	__remove_wait_queue(&ctx->fd_wqh, &wait);
 	__set_current_state(TASK_RUNNING);
-	spin_unlock(&ctx->fd_wqh.lock);
+	spin_unlock_irq(&ctx->fd_wqh.lock);
 
 	if (!ret && msg->event == UFFD_EVENT_FORK) {
 		ret = resolve_userfault_fork(ctx, fork_nctx, msg);


* Re: possible deadlock in aio_poll
  2018-09-10 16:53 ` Christoph Hellwig
@ 2018-09-10 18:14   ` Miklos Szeredi
  2018-09-11  6:33     ` Christoph Hellwig
  2018-10-17 23:55   ` Andrea Arcangeli
  1 sibling, 1 reply; 7+ messages in thread
From: Miklos Szeredi @ 2018-09-10 18:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: syzbot, bcrl, linux-aio, linux-fsdevel, linux-kernel,
	syzkaller-bugs, Al Viro, Andrea Arcangeli, Andrew Morton

On Mon, Sep 10, 2018 at 6:53 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Sep 10, 2018 at 12:41:05AM -0700, syzbot wrote:
>> =====================================================
>> WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
>> 4.19.0-rc2+ #229 Not tainted
>> -----------------------------------------------------
>> syz-executor2/9399 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
>> 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: spin_lock
>> include/linux/spinlock.h:329 [inline]
>> 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: aio_poll+0x760/0x1420
>> fs/aio.c:1747
>>
>> and this task is already holding:
>> 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq
>> include/linux/spinlock.h:354 [inline]
>> 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll+0x738/0x1420
>> fs/aio.c:1746
>> which would create a new lock dependency:
>>  (&(&ctx->ctx_lock)->rlock){..-.} -> (&ctx->fd_wqh){+.+.}
>
> ctx->fd_wqh seems to only exist in userfaultfd, which indeed seems
> to do strange open coded waitqueue locking, and seems to fail to disable
> irqs.  Something like this should fix it:

Why do pollable waitqueues need to disable interrupts generally?

I don't see anything fundamental in the poll interface to force this
requirement on users of that interface.

Thanks,
Miklos


* Re: possible deadlock in aio_poll
  2018-09-10 18:14   ` Miklos Szeredi
@ 2018-09-11  6:33     ` Christoph Hellwig
  2018-09-11  7:20       ` Miklos Szeredi
  0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2018-09-11  6:33 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Christoph Hellwig, syzbot, bcrl, linux-aio, linux-fsdevel,
	linux-kernel, syzkaller-bugs, Al Viro, Andrea Arcangeli,
	Andrew Morton

On Mon, Sep 10, 2018 at 08:14:20PM +0200, Miklos Szeredi wrote:
> Why do pollable waitqueues need to disable interrupts generally?

Any waitqueue needs to disable interrupts for consistency.  We
always use spin_lock_irqsave in __wake_up_common_lock() for example.


* Re: possible deadlock in aio_poll
  2018-09-11  6:33     ` Christoph Hellwig
@ 2018-09-11  7:20       ` Miklos Szeredi
  0 siblings, 0 replies; 7+ messages in thread
From: Miklos Szeredi @ 2018-09-11  7:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: syzbot, bcrl, linux-aio, linux-fsdevel, linux-kernel,
	syzkaller-bugs, Al Viro, Andrea Arcangeli, Andrew Morton

On Tue, Sep 11, 2018 at 8:33 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Sep 10, 2018 at 08:14:20PM +0200, Miklos Szeredi wrote:
>> Why do pollable waitqueues need to disable interrupts generally?
>
> Any waitqueue needs to disable interrupts for consistency.  We
> always use spin_lock_irqsave in __wake_up_common_lock() for example.

There are the _locked (non-_irq) wakeup variants that do not.

And poll/select/etc don't impose non-interruptibility on wakeups
either.  So it looks like it's just aio that has weird spinlock
dependencies that force this requirement on a waitqueue used in
->poll().

Thanks,
Miklos


* Re: possible deadlock in aio_poll
  2018-09-10 16:53 ` Christoph Hellwig
  2018-09-10 18:14   ` Miklos Szeredi
@ 2018-10-17 23:55   ` Andrea Arcangeli
  1 sibling, 0 replies; 7+ messages in thread
From: Andrea Arcangeli @ 2018-10-17 23:55 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: syzbot, bcrl, linux-aio, linux-fsdevel, linux-kernel,
	syzkaller-bugs, viro, akpm

On Mon, Sep 10, 2018 at 09:53:17AM -0700, Christoph Hellwig wrote:
> On Mon, Sep 10, 2018 at 12:41:05AM -0700, syzbot wrote:
> > =====================================================
> > WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> > 4.19.0-rc2+ #229 Not tainted
> > -----------------------------------------------------
> > syz-executor2/9399 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> > 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: spin_lock
> > include/linux/spinlock.h:329 [inline]
> > 00000000126506e0 (&ctx->fd_wqh){+.+.}, at: aio_poll+0x760/0x1420
> > fs/aio.c:1747
> > 
> > and this task is already holding:
> > 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq
> > include/linux/spinlock.h:354 [inline]
> > 000000002bed6bf6 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll+0x738/0x1420
> > fs/aio.c:1746
> > which would create a new lock dependency:
> >  (&(&ctx->ctx_lock)->rlock){..-.} -> (&ctx->fd_wqh){+.+.}
> 
> ctx->fd_wqh seems to only exist in userfaultfd, which indeed seems
> to do strange open coded waitqueue locking, and seems to fail to disable
> irqs.  Something like this should fix it:
> 
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index bfa0ec69f924..356d2b8568c1 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -1026,7 +1026,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
>  	struct userfaultfd_ctx *fork_nctx = NULL;
>  
>  	/* always take the fd_wqh lock before the fault_pending_wqh lock */
> -	spin_lock(&ctx->fd_wqh.lock);
> +	spin_lock_irq(&ctx->fd_wqh.lock);
>  	__add_wait_queue(&ctx->fd_wqh, &wait);
>  	for (;;) {
>  		set_current_state(TASK_INTERRUPTIBLE);
> @@ -1112,13 +1112,13 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
>  			ret = -EAGAIN;
>  			break;
>  		}
> -		spin_unlock(&ctx->fd_wqh.lock);
> +		spin_unlock_irq(&ctx->fd_wqh.lock);
>  		schedule();
> -		spin_lock(&ctx->fd_wqh.lock);
> +		spin_lock_irq(&ctx->fd_wqh.lock);
>  	}
>  	__remove_wait_queue(&ctx->fd_wqh, &wait);
>  	__set_current_state(TASK_RUNNING);
> -	spin_unlock(&ctx->fd_wqh.lock);
> +	spin_unlock_irq(&ctx->fd_wqh.lock);
>  
>  	if (!ret && msg->event == UFFD_EVENT_FORK) {
>  		ret = resolve_userfault_fork(ctx, fork_nctx, msg);
> 

Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>

This is a lock inversion with userfaultfd_poll, which takes the fd_wqh
lock after the irq-safe aio lock.  And the aio lock can be taken from
softirq (so potentially from interrupts), leading to a potential lock
inversion deadlock.

I suggest adding a comment about the above in the code, before the
first spin_lock_irq, to explain why it needs to be _irq; otherwise it's
not obvious.

c430d1e848ff1240d126e79780f3c26208b8aed9 was just a false positive
instead.

I didn't comment on c430d1e848ff1240d126e79780f3c26208b8aed9 because I
was too busy with the speculative execution issues at the time, and it
was fine to drop the micro-optimization; but while at it, can we look
into adding a spin_acquire annotation, or find another way to teach
lockdep, so that it stays happy even if we restore the
micro-optimization?

If we do that, we should also initialize the ctx->fault_wqh spinlock
to locked in the same patch (a spin_lock during uffd ctx creation will
do), to be sure nobody takes it, as a further robustness feature
against future modification.  It is also more self-documenting that
this lock is not supposed to be taken and that fault_pending_wqh.lock
has to be taken instead.

Thanks,
Andrea
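The "initialize to locked" suggestion above can be sketched as a
hypothetical userspace model (pthread mutexes standing in for kernel
spinlocks; struct and function names invented): take the
never-to-be-used lock once at context creation and keep it held, so
any future code that wrongly tries to take it fails immediately
instead of appearing to work.

```c
#include <pthread.h>

struct uffd_ctx_model {
	pthread_mutex_t fault_wqh_lock; /* stands in for ctx->fault_wqh.lock */
};

void uffd_ctx_model_init(struct uffd_ctx_model *ctx)
{
	pthread_mutex_init(&ctx->fault_wqh_lock, NULL);
	/*
	 * Poison: held forever.  fault_pending_wqh.lock is what
	 * actually serializes both wait queues, so nothing should
	 * ever take this lock directly.
	 */
	pthread_mutex_lock(&ctx->fault_wqh_lock);
}

/* Returns non-zero (EBUSY) when the poisoned lock cannot be taken. */
int uffd_ctx_model_try_lock(struct uffd_ctx_model *ctx)
{
	return pthread_mutex_trylock(&ctx->fault_wqh_lock);
}
```

In the kernel the analogous effect would come from spinlock debugging
or lockdep complaining about the second acquisition, which is what
makes the permanently-held lock self-documenting.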


* Re: possible deadlock in aio_poll
  2018-09-10  7:41 possible deadlock in aio_poll syzbot
  2018-09-10 16:53 ` Christoph Hellwig
@ 2018-10-27  6:16 ` syzbot
  1 sibling, 0 replies; 7+ messages in thread
From: syzbot @ 2018-10-27  6:16 UTC (permalink / raw)
  To: aarcange, akpm, bcrl, hch, linux-aio, linux-fsdevel,
	linux-kernel, miklos, syzkaller-bugs, viro

syzbot has found a reproducer for the following crash on:

HEAD commit:    18d0eae30e6a Merge tag 'char-misc-4.20-rc1' of git://git.k..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14728be5400000
kernel config:  https://syzkaller.appspot.com/x/.config?x=342f43de913c81b9
dashboard link: https://syzkaller.appspot.com/bug?extid=5b1df0420c523b45a953
compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=161d6999400000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=110f4cf5400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+5b1df0420c523b45a953@syzkaller.appspotmail.com


=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
4.19.0+ #84 Not tainted
-----------------------------------------------------
syz-executor781/7254 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
00000000e70e96f7 (&ctx->fd_wqh){+.+.}, at: spin_lock  
include/linux/spinlock.h:329 [inline]
00000000e70e96f7 (&ctx->fd_wqh){+.+.}, at: aio_poll+0x760/0x1420  
fs/aio.c:1747

and this task is already holding:
000000009957d7d7 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq  
include/linux/spinlock.h:354 [inline]
000000009957d7d7 (&(&ctx->ctx_lock)->rlock){..-.}, at:  
aio_poll+0x738/0x1420 fs/aio.c:1746
which would create a new lock dependency:
  (&(&ctx->ctx_lock)->rlock){..-.} -> (&ctx->fd_wqh){+.+.}

but this new dependency connects a SOFTIRQ-irq-safe lock:
  (&(&ctx->ctx_lock)->rlock){..-.}

... which became SOFTIRQ-irq-safe at:
   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
   _raw_spin_lock_irq+0x61/0x80 kernel/locking/spinlock.c:160
   spin_lock_irq include/linux/spinlock.h:354 [inline]
   free_ioctx_users+0xbc/0x710 fs/aio.c:603
   percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
   percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
   percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
   percpu_ref_switch_to_atomic_rcu+0x563/0x730 lib/percpu-refcount.c:158
   __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
   rcu_do_batch kernel/rcu/tree.c:2437 [inline]
   invoke_rcu_callbacks kernel/rcu/tree.c:2716 [inline]
   rcu_process_callbacks+0x100a/0x1ac0 kernel/rcu/tree.c:2697
   __do_softirq+0x308/0xb7e kernel/softirq.c:292
   invoke_softirq kernel/softirq.c:373 [inline]
   irq_exit+0x17f/0x1c0 kernel/softirq.c:413
   exiting_irq arch/x86/include/asm/apic.h:536 [inline]
   smp_apic_timer_interrupt+0x1cb/0x760 arch/x86/kernel/apic/apic.c:1061
   apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:801
   native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:57
   arch_safe_halt arch/x86/include/asm/paravirt.h:151 [inline]
   default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
   arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
   default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
   cpuidle_idle_call kernel/sched/idle.c:153 [inline]
   do_idle+0x49b/0x5c0 kernel/sched/idle.c:262
   cpu_startup_entry+0x18/0x20 kernel/sched/idle.c:353
   start_secondary+0x487/0x5f0 arch/x86/kernel/smpboot.c:271
   secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:243

to a SOFTIRQ-irq-unsafe lock:
  (&ctx->fd_wqh){+.+.}

... which became SOFTIRQ-irq-unsafe at:
...
   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
   spin_lock include/linux/spinlock.h:329 [inline]
   userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
   userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
   __vfs_read+0x117/0x9b0 fs/read_write.c:416
   vfs_read+0x17f/0x3c0 fs/read_write.c:452
   ksys_read+0x101/0x260 fs/read_write.c:578
   __do_sys_read fs/read_write.c:588 [inline]
   __se_sys_read fs/read_write.c:586 [inline]
   __x64_sys_read+0x73/0xb0 fs/read_write.c:586
   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

  Possible interrupt unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&ctx->fd_wqh);
                                local_irq_disable();
                                lock(&(&ctx->ctx_lock)->rlock);
                                lock(&ctx->fd_wqh);
   <Interrupt>
     lock(&(&ctx->ctx_lock)->rlock);

  *** DEADLOCK ***

1 lock held by syz-executor781/7254:
  #0: 000000009957d7d7 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq  
include/linux/spinlock.h:354 [inline]
  #0: 000000009957d7d7 (&(&ctx->ctx_lock)->rlock){..-.}, at:  
aio_poll+0x738/0x1420 fs/aio.c:1746

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&(&ctx->ctx_lock)->rlock){..-.} {
    IN-SOFTIRQ-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
                     __raw_spin_lock_irq  
include/linux/spinlock_api_smp.h:128 [inline]
                     _raw_spin_lock_irq+0x61/0x80  
kernel/locking/spinlock.c:160
                     spin_lock_irq include/linux/spinlock.h:354 [inline]
                     free_ioctx_users+0xbc/0x710 fs/aio.c:603
                     percpu_ref_put_many include/linux/percpu-refcount.h:285  
[inline]
                     percpu_ref_put include/linux/percpu-refcount.h:301  
[inline]
                     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123  
[inline]
                     percpu_ref_switch_to_atomic_rcu+0x563/0x730  
lib/percpu-refcount.c:158
                     __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
                     rcu_do_batch kernel/rcu/tree.c:2437 [inline]
                     invoke_rcu_callbacks kernel/rcu/tree.c:2716 [inline]
                     rcu_process_callbacks+0x100a/0x1ac0  
kernel/rcu/tree.c:2697
                     __do_softirq+0x308/0xb7e kernel/softirq.c:292
                     invoke_softirq kernel/softirq.c:373 [inline]
                     irq_exit+0x17f/0x1c0 kernel/softirq.c:413
                     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
                     smp_apic_timer_interrupt+0x1cb/0x760  
arch/x86/kernel/apic/apic.c:1061
                     apic_timer_interrupt+0xf/0x20  
arch/x86/entry/entry_64.S:801
                     native_safe_halt+0x6/0x10  
arch/x86/include/asm/irqflags.h:57
                     arch_safe_halt arch/x86/include/asm/paravirt.h:151  
[inline]
                     default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
                     arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
                     default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
                     cpuidle_idle_call kernel/sched/idle.c:153 [inline]
                     do_idle+0x49b/0x5c0 kernel/sched/idle.c:262
                     cpu_startup_entry+0x18/0x20 kernel/sched/idle.c:353
                     start_secondary+0x487/0x5f0  
arch/x86/kernel/smpboot.c:271
                     secondary_startup_64+0xa4/0xb0  
arch/x86/kernel/head_64.S:243
    INITIAL USE at:
                    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
                    __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128  
[inline]
                    _raw_spin_lock_irq+0x61/0x80  
kernel/locking/spinlock.c:160
                    spin_lock_irq include/linux/spinlock.h:354 [inline]
                    free_ioctx_users+0xbc/0x710 fs/aio.c:603
                    percpu_ref_put_many include/linux/percpu-refcount.h:285  
[inline]
                    percpu_ref_put include/linux/percpu-refcount.h:301  
[inline]
                    percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123  
[inline]
                    percpu_ref_switch_to_atomic_rcu+0x563/0x730  
lib/percpu-refcount.c:158
                    __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
                    rcu_do_batch kernel/rcu/tree.c:2437 [inline]
                    invoke_rcu_callbacks kernel/rcu/tree.c:2716 [inline]
                    rcu_process_callbacks+0x100a/0x1ac0  
kernel/rcu/tree.c:2697
                    __do_softirq+0x308/0xb7e kernel/softirq.c:292
                    invoke_softirq kernel/softirq.c:373 [inline]
                    irq_exit+0x17f/0x1c0 kernel/softirq.c:413
                    exiting_irq arch/x86/include/asm/apic.h:536 [inline]
                    smp_apic_timer_interrupt+0x1cb/0x760  
arch/x86/kernel/apic/apic.c:1061
                    apic_timer_interrupt+0xf/0x20  
arch/x86/entry/entry_64.S:801
                    native_safe_halt+0x6/0x10  
arch/x86/include/asm/irqflags.h:57
                    arch_safe_halt arch/x86/include/asm/paravirt.h:151  
[inline]
                    default_idle+0xbf/0x490 arch/x86/kernel/process.c:498
                    arch_cpu_idle+0x10/0x20 arch/x86/kernel/process.c:489
                    default_idle_call+0x6d/0x90 kernel/sched/idle.c:93
                    cpuidle_idle_call kernel/sched/idle.c:153 [inline]
                    do_idle+0x49b/0x5c0 kernel/sched/idle.c:262
                    cpu_startup_entry+0x18/0x20 kernel/sched/idle.c:353
                    start_secondary+0x487/0x5f0 arch/x86/kernel/smpboot.c:271
                    secondary_startup_64+0xa4/0xb0  
arch/x86/kernel/head_64.S:243
  }
  ... key      at: [<ffffffff8aed9b20>] __key.50623+0x0/0x40
  ... acquired at:
    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
    spin_lock include/linux/spinlock.h:329 [inline]
    aio_poll+0x760/0x1420 fs/aio.c:1747
    io_submit_one+0xa49/0xf80 fs/aio.c:1850
    __do_sys_io_submit fs/aio.c:1916 [inline]
    __se_sys_io_submit fs/aio.c:1887 [inline]
    __x64_sys_io_submit+0x1b7/0x580 fs/aio.c:1887
    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
    entry_SYSCALL_64_after_hwframe+0x49/0xbe


the dependencies between the lock to be acquired
  and SOFTIRQ-irq-unsafe lock:
-> (&ctx->fd_wqh){+.+.} {
    HARDIRQ-ON-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
                     __raw_spin_lock include/linux/spinlock_api_smp.h:142  
[inline]
                     _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                     spin_lock include/linux/spinlock.h:329 [inline]
                     userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                     userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                     __vfs_read+0x117/0x9b0 fs/read_write.c:416
                     vfs_read+0x17f/0x3c0 fs/read_write.c:452
                     ksys_read+0x101/0x260 fs/read_write.c:578
                     __do_sys_read fs/read_write.c:588 [inline]
                     __se_sys_read fs/read_write.c:586 [inline]
                     __x64_sys_read+0x73/0xb0 fs/read_write.c:586
                     do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    SOFTIRQ-ON-W at:
                     lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
                     __raw_spin_lock include/linux/spinlock_api_smp.h:142  
[inline]
                     _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                     spin_lock include/linux/spinlock.h:329 [inline]
                     userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                     userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                     __vfs_read+0x117/0x9b0 fs/read_write.c:416
                     vfs_read+0x17f/0x3c0 fs/read_write.c:452
                     ksys_read+0x101/0x260 fs/read_write.c:578
                     __do_sys_read fs/read_write.c:588 [inline]
                     __se_sys_read fs/read_write.c:586 [inline]
                     __x64_sys_read+0x73/0xb0 fs/read_write.c:586
                     do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    INITIAL USE at:
                    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142  
[inline]
                    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
                    spin_lock include/linux/spinlock.h:329 [inline]
                    userfaultfd_ctx_read+0x2e4/0x2180 fs/userfaultfd.c:1029
                    userfaultfd_read+0x1e2/0x2c0 fs/userfaultfd.c:1191
                    __vfs_read+0x117/0x9b0 fs/read_write.c:416
                    vfs_read+0x17f/0x3c0 fs/read_write.c:452
                    ksys_read+0x101/0x260 fs/read_write.c:578
                    __do_sys_read fs/read_write.c:588 [inline]
                    __se_sys_read fs/read_write.c:586 [inline]
                    __x64_sys_read+0x73/0xb0 fs/read_write.c:586
                    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
  }
  ... key      at: [<ffffffff8aed98a0>] __key.44253+0x0/0x40
  ... acquired at:
    lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
    spin_lock include/linux/spinlock.h:329 [inline]
    aio_poll+0x760/0x1420 fs/aio.c:1747
    io_submit_one+0xa49/0xf80 fs/aio.c:1850
    __do_sys_io_submit fs/aio.c:1916 [inline]
    __se_sys_io_submit fs/aio.c:1887 [inline]
    __x64_sys_io_submit+0x1b7/0x580 fs/aio.c:1887
    do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
    entry_SYSCALL_64_after_hwframe+0x49/0xbe


stack backtrace:
CPU: 0 PID: 7254 Comm: syz-executor781 Not tainted 4.19.0+ #84
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS  
Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:77 [inline]
  dump_stack+0x244/0x39d lib/dump_stack.c:113
  print_bad_irq_dependency kernel/locking/lockdep.c:1570 [inline]
  check_usage.cold.58+0x6d5/0xad1 kernel/locking/lockdep.c:1602
  check_irq_usage kernel/locking/lockdep.c:1658 [inline]
  check_prev_add_irq kernel/locking/lockdep_states.h:8 [inline]
  check_prev_add kernel/locking/lockdep.c:1868 [inline]
  check_prevs_add kernel/locking/lockdep.c:1976 [inline]
  validate_chain kernel/locking/lockdep.c:2347 [inline]
  __lock_acquire+0x238a/0x4c20 kernel/locking/lockdep.c:3341
  lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2d/0x40 kernel/locking/spinlock.c:144
  spin_lock include/linux/spinlock.h:329 [inline]
  aio_poll+0x760/0x1420 fs/aio.c:1747
  io_submit_one+0xa49/0xf80 fs/aio.c:1850
  __do_sys_io_submit fs/aio.c:1916 [inline]
  __se_sys_io_submit fs/aio.c:1887 [inline]
  __x64_sys_io_submit+0x1b7/0x580 fs/aio.c:1887
  do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x447dc9
Code: e8 9c ba 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7  
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff  
ff 0f 83 5b 07 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fc840e69da8 EFLAGS: 00000293 ORIG_RAX: 00000000000000d1
RAX: ffffffffffffffda RBX: 00000000006e39e8 RCX: 0000000000447dc9
RDX: 0000000020000b00 RSI: 0000000000000001 RDI: 00007fc840e39000
RBP: 00000000006e39e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000293 R12: 00000000006e39ec
R13: 702f74656e2f666c R14: 65732f636f72702f R15: 0000000000000000


Thread overview: 7+ messages
2018-09-10  7:41 possible deadlock in aio_poll syzbot
2018-09-10 16:53 ` Christoph Hellwig
2018-09-10 18:14   ` Miklos Szeredi
2018-09-11  6:33     ` Christoph Hellwig
2018-09-11  7:20       ` Miklos Szeredi
2018-10-17 23:55   ` Andrea Arcangeli
2018-10-27  6:16 ` syzbot
