* INFO: task hung in pipe_read (2)
@ 2020-08-01  8:45 syzbot
  2020-08-01 15:29 ` Tetsuo Handa
From: syzbot @ 2020-08-01  8:45 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel, syzkaller-bugs, viro

Hello,

syzbot found the following issue on:

HEAD commit:    01830e6c Add linux-next specific files for 20200731
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=11b922e0900000
kernel config:  https://syzkaller.appspot.com/x/.config?x=2e226b2d1364112c
dashboard link: https://syzkaller.appspot.com/bug?extid=96cc7aba7e969b1d305c
compiler:       gcc (GCC) 10.1.0-syz 20200507
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=140e5d5c900000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+96cc7aba7e969b1d305c@syzkaller.appspotmail.com

INFO: task syz-execprog:6857 blocked for more than 143 seconds.
      Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-execprog    D27640  6857   6837 0x00004000
Call Trace:
 context_switch kernel/sched/core.c:3669 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4418
 schedule+0xd0/0x2a0 kernel/sched/core.c:4493
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4552
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 __pipe_lock fs/pipe.c:87 [inline]
 pipe_read+0x136/0x13d0 fs/pipe.c:247
 call_read_iter include/linux/fs.h:1870 [inline]
 new_sync_read+0x5b3/0x6e0 fs/read_write.c:414
 vfs_read+0x383/0x5a0 fs/read_write.c:493
 ksys_read+0x1ee/0x250 fs/read_write.c:624
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x4ad88b
Code: Bad RIP value.
RSP: 002b:000000c00002ae10 EFLAGS: 00000202 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 000000c000020800 RCX: 00000000004ad88b
RDX: 0000000000010000 RSI: 000000c000390000 RDI: 0000000000000008
RBP: 000000c00002ae60 R08: 0000000000000001 R09: 0000000000000002
R10: 000000c000380000 R11: 0000000000000202 R12: 0000000000000003
R13: 000000c000082a80 R14: 000000c000310600 R15: 0000000000000000
INFO: task syz-executor.0:17080 blocked for more than 143 seconds.
      Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.0  D29144 17080  16608 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:3669 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4418
 schedule+0xd0/0x2a0 kernel/sched/core.c:4493
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4552
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 __pipe_lock fs/pipe.c:87 [inline]
 pipe_write+0x12c/0x16c0 fs/pipe.c:435
 call_write_iter include/linux/fs.h:1876 [inline]
 new_sync_write+0x422/0x650 fs/read_write.c:515
 vfs_write+0x5ad/0x730 fs/read_write.c:595
 ksys_write+0x1ee/0x250 fs/read_write.c:648
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45cc79
Code: Bad RIP value.
RSP: 002b:00007fff6c963cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000037d40 RCX: 000000000045cc79
RDX: 000000000208e24b RSI: 0000000020000040 RDI: 0000000000000000
RBP: 000000000078bf40 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000790378
R13: 0000000000000000 R14: 0000000000000df5 R15: 000000000078bf0c
INFO: task syz-executor.0:17140 blocked for more than 144 seconds.
      Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.0  D29144 17140  16608 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:3669 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4418
 schedule+0xd0/0x2a0 kernel/sched/core.c:4493
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4552
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 __pipe_lock fs/pipe.c:87 [inline]
 pipe_write+0x12c/0x16c0 fs/pipe.c:435
 call_write_iter include/linux/fs.h:1876 [inline]
 new_sync_write+0x422/0x650 fs/read_write.c:515
 vfs_write+0x5ad/0x730 fs/read_write.c:595
 ksys_write+0x1ee/0x250 fs/read_write.c:648
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45cc79
Code: Bad RIP value.
RSP: 002b:00007fff6c963cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000037d40 RCX: 000000000045cc79
RDX: 000000000208e24b RSI: 0000000020000040 RDI: 0000000000000000
RBP: 000000000078bf40 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000790378
R13: 0000000000000000 R14: 0000000000000df5 R15: 000000000078bf0c
INFO: task syz-executor.0:17145 blocked for more than 145 seconds.
      Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.0  D29144 17145  16608 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:3669 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4418
 schedule+0xd0/0x2a0 kernel/sched/core.c:4493
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4552
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 __pipe_lock fs/pipe.c:87 [inline]
 pipe_write+0x12c/0x16c0 fs/pipe.c:435
 call_write_iter include/linux/fs.h:1876 [inline]
 new_sync_write+0x422/0x650 fs/read_write.c:515
 vfs_write+0x5ad/0x730 fs/read_write.c:595
 ksys_write+0x1ee/0x250 fs/read_write.c:648
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45cc79
Code: Bad RIP value.
RSP: 002b:00007fff6c963cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000037d40 RCX: 000000000045cc79
RDX: 000000000208e24b RSI: 0000000020000040 RDI: 0000000000000000
RBP: 000000000078bf40 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000790378
R13: 0000000000000000 R14: 0000000000000df5 R15: 000000000078bf0c

Showing all locks held in the system:
1 lock held by khungtaskd/1170:
 #0: ffffffff89c52a80 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:5823
1 lock held by systemd-udevd/3906:
1 lock held by in:imklog/6629:
 #0: ffff8880996bf930 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:930
1 lock held by syz-execprog/6857:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_read+0x136/0x13d0 fs/pipe.c:247
1 lock held by syz-executor.0/16822:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x5bd/0x16c0 fs/pipe.c:580
1 lock held by syz-executor.0/17080:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17140:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17145:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17209:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17222:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17312:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17314:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17317:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17333:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17360:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17363:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17369:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17377:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17382:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17385:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17390:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17400:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17410:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17415:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17441:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17489:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17501:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17516:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17524:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17531:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17574:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17579:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17582:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17584:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17587:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17590:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17593:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17595:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17600:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17605:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17610:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17612:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17615:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17625:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17632:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17637:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17661:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17691:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17731:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17752:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17764:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17781:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17786:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17791:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17794:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17819:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17829:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17876:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17891:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17894:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17896:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/17906:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18001:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18044:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18054:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18067:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18087:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18093:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18098:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18105:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18130:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18133:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18146:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18149:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18153:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18165:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18173:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18185:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18190:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18195:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18202:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18207:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18217:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18222:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18225:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18237:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18250:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18255:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18269:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18286:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18296:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18304:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18306:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18311:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18321:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18335:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18340:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18343:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18358:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18361:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18363:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18378:
1 lock held by syz-executor.0/18409:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/18792:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/19006:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/19154:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/19338:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/20014:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/20107:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/20266:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/20673:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/20961:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/21389:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/21601:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/21778:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/21835:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/21965:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22161:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22243:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22293:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22530:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22681:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22732:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/22837:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/23154:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/23328:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/23804:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/24439:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/24641:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/24706:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/24903:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25118:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25179:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25463:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25472:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25518:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25592:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25594:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25880:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25891:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25902:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/25953:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26037:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26090:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26218:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26292:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26496:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26671:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26755:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/26764:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/27070:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/27105:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/27176:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/27637:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435
1 lock held by syz-executor.0/27730:
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:87 [inline]
 #0: ffff888092562068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x12c/0x16c0 fs/pipe.c:435

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1170 Comm: khungtaskd Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x223 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:147 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:253 [inline]
 watchdog+0xd89/0xf30 kernel/hung_task.c:339
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3905 Comm: systemd-journal Not tainted 5.8.0-rc7-next-20200731-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0033:0x7ffc39794990
Code: ff ff ff 7f 48 8d 05 0f b7 ff ff 48 8d 15 08 e7 ff ff 48 0f 44 c2 48 85 ff 48 8b 40 20 74 03 48 89 07 c3 0f 1f 80 00 00 00 00 <55> 48 89 e5 41 55 4c 63 ef 41 54 49 89 f4 48 83 ec 08 41 83 fd 0f
RSP: 002b:00007ffc397271d8 EFLAGS: 00000206
RAX: 00007ffc39794990 RBX: 0000000000000000 RCX: 00000000000000cc
RDX: 00000000000003e7 RSI: 00007ffc39727200 RDI: 0000000000000000
RBP: 00007ffc39727200 R08: 00005615fdccd3e5 R09: 0000000000000018
R10: 0000000000000069 R11: 0000000000000246 R12: 000000000000014d
R13: 00000000000012bf R14: 0000000000000033 R15: 00007ffc397276f0
FS:  00007fade38dd8c0 GS:  0000000000000000


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches


* Re: INFO: task hung in pipe_read (2)
  2020-08-01  8:45 INFO: task hung in pipe_read (2) syzbot
@ 2020-08-01 15:29 ` Tetsuo Handa
  2020-08-01 17:39   ` Linus Torvalds
From: Tetsuo Handa @ 2020-08-01 15:29 UTC (permalink / raw)
  To: syzbot, syzkaller-bugs; +Cc: linux-fsdevel, linux-kernel, viro

Waiting for response at https://lkml.kernel.org/r/45a9b2c8-d0b7-8f00-5b30-0cfe3e028b28@I-love.SAKURA.ne.jp .

#syz dup: INFO: task hung in pipe_write (4)



* Re: INFO: task hung in pipe_read (2)
  2020-08-01 15:29 ` Tetsuo Handa
@ 2020-08-01 17:39   ` Linus Torvalds
  2020-08-07  5:31     ` Andrea Arcangeli
From: Linus Torvalds @ 2020-08-01 17:39 UTC (permalink / raw)
  To: Tetsuo Handa, Andrea Arcangeli
  Cc: syzbot, syzkaller-bugs, linux-fsdevel, Linux Kernel Mailing List,
	Al Viro


On Sat, Aug 1, 2020 at 8:30 AM Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
>
> Waiting for response at https://lkml.kernel.org/r/45a9b2c8-d0b7-8f00-5b30-0cfe3e028b28@I-love.SAKURA.ne.jp .

I think handle_userfault() should have a (shortish) timeout, and just
return VM_FAULT_RETRY.

The code is overly complex anyway, because it predates the "just return RETRY".

And because we can't wait forever when the source of the fault is a
kernel exception, I think we should add some extra logic to just say
"if this is a retry, we've already done this once, just return an
error".

This is a TEST PATCH ONLY. I think we'll actually have to do something
like this, but I think the final version might need to allow a couple
of retries, rather than just give up after just one second.

But for testing your case, this patch might be enough to at least show
that "yeah, this kind of approach works".

Andrea? Comments? As mentioned, this is probably much too aggressive,
but I do think we need to limit the time that the kernel will wait for
page faults.

Because userfaultfd has become a huge source of security holes as a
way to time kernel faults or delay them indefinitely.

                     Linus

[-- Attachment #2: patch --]
[-- Type: application/octet-stream, Size: 1829 bytes --]

 fs/userfaultfd.c | 35 ++++++++++++-----------------------
 1 file changed, 12 insertions(+), 23 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 52de29000c7e..bd739488bb29 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -473,6 +473,16 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 		goto out;
 	}
 
+	/*
+	 * If this is a kernel fault, and we're retrying, consider
+	 * it fatal. Otherwise we have deadlocks and other nasty
+	 * stuff.
+	 */
+	if (vmf->flags & FAULT_FLAG_TRIED) {
+		if (WARN_ON_ONCE(!(vmf->flags & FAULT_FLAG_USER)))
+			goto out;
+	}
+
 	/*
 	 * Handle nowait, not much to do other than tell it to retry
 	 * and wait.
@@ -516,33 +526,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 						       vmf->flags, reason);
 	mmap_read_unlock(mm);
 
+	/* We'll wait for up to a second, and then return VM_FAULT_RETRY */
 	if (likely(must_wait && !READ_ONCE(ctx->released) &&
 		   !userfaultfd_signal_pending(vmf->flags))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
-		schedule();
+		schedule_timeout(HZ);
 		ret |= VM_FAULT_MAJOR;
-
-		/*
-		 * False wakeups can orginate even from rwsem before
-		 * up_read() however userfaults will wait either for a
-		 * targeted wakeup on the specific uwq waitqueue from
-		 * wake_userfault() or for signals or for uffd
-		 * release.
-		 */
-		while (!READ_ONCE(uwq.waken)) {
-			/*
-			 * This needs the full smp_store_mb()
-			 * guarantee as the state write must be
-			 * visible to other CPUs before reading
-			 * uwq.waken from other CPUs.
-			 */
-			set_current_state(blocking_state);
-			if (READ_ONCE(uwq.waken) ||
-			    READ_ONCE(ctx->released) ||
-			    userfaultfd_signal_pending(vmf->flags))
-				break;
-			schedule();
-		}
 	}
 
 	__set_current_state(TASK_RUNNING);
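
For reference, the "allow a couple of retries" variant mentioned above
might look like the sketch below, i.e. a bounded version of the wait
loop the patch removes, in place of the single schedule_timeout(HZ).
The bound of 3 retries is purely an illustrative value, not something
from the patch:

	/*
	 * Sketch only: wait in bounded one-second intervals, giving
	 * the monitor a few chances to resolve the fault before we
	 * fall through and return VM_FAULT_RETRY.
	 */
	int tries = 3;	/* arbitrary example bound */

	while (tries--) {
		set_current_state(blocking_state);
		if (READ_ONCE(uwq.waken) ||
		    READ_ONCE(ctx->released) ||
		    userfaultfd_signal_pending(vmf->flags))
			break;
		schedule_timeout(HZ);
	}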


* Re: INFO: task hung in pipe_read (2)
  2020-08-01 17:39   ` Linus Torvalds
@ 2020-08-07  5:31     ` Andrea Arcangeli
  2020-08-08  1:01       ` Tetsuo Handa
From: Andrea Arcangeli @ 2020-08-07  5:31 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Tetsuo Handa, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Al Viro

Hello!

On Sat, Aug 01, 2020 at 10:39:00AM -0700, Linus Torvalds wrote:
> On Sat, Aug 1, 2020 at 8:30 AM Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
> >
> > Waiting for response at https://lkml.kernel.org/r/45a9b2c8-d0b7-8f00-5b30-0cfe3e028b28@I-love.SAKURA.ne.jp .
> 
> I think handle_userfault() should have a (shortish) timeout, and just
> return VM_FAULT_RETRY.

The 1 sec timeout, if applied only to kernel faults (not the case yet,
but it would be enough to silence the hangcheck timer), will work
perfectly for Android, but it will break qemu.

[  916.954313] INFO: task syz-executor.0:61593 blocked for more than 40 seconds.

If you want to enforce a timeout, 40 seconds or something of the order
of the hangcheck timer would be more reasonable.

1 sec is the same order of magnitude of latency that you'd get from an
in-place host kernel upgrade with kexec (with the guest memory
preserved in RAM), which you already suffer occasionally in most public clouds.

So postcopy live migration should be allowed to incur a 1 sec latency;
it shouldn't become a deal breaker that results in the VM getting killed.

> The code is overly complex anyway, because it predates the "just return RETRY".
> 
> And because we can't wait forever when the source of the fault is a
> kernel exception, I think we should add some extra logic to just say
> "if this is a retry, we've already done this once, just return an
> error".

Until uffd-wp was merged recently, we never needed more than one
VM_FAULT_RETRY to handle uffd-missing faults; you seem to want to go
back to that, which again would be fine for uffd-missing faults.

I haven't had time to read and test the testcase properly yet, but at
first glance from reading the hangcheck report it looks like there
would be just one userfault? So I don't see an immediate connection.

The change adding a 1 sec timeout would definitely fix this issue, but
it'll also break qemu and probably the vast majority of users.

> This is a TEST PATCH ONLY. I think we'll actually have to do something
> like this, but I think the final version might need to allow a couple
> of retries, rather than just give up after just one second.
> 
> But for testing your case, this patch might be enough to at least show
> that "yeah, this kind of approach works".
> 
> Andrea? Comments? As mentioned, this is probably much too aggressive,
> but I do think we need to limit the time that the kernel will wait for
> page faults.

Why does the pipe code prevent SIGKILL from killing the task that is
blocked on the mutex_lock? Is there any good reason for it, or does it
simply have room for improvement regardless of the hangcheck report? It'd
be great if we could look into that before looking into the uffd-specific bits.

The hangcheck timer would have zero issues with tasks that can be
killed, if only the pipe code could be improved to use mutex_lock_killable.

		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
		if (t->state == TASK_UNINTERRUPTIBLE)
			check_hung_task(t, timeout);

The hangcheck report is just telling us that one task was in D state a
little too long; it wasn't a fatal error, the kernel wasn't actually
destabilized, and the only malfunction reported is that a task was
unkillable for too long.
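
For illustration, the killable variant might look like the sketch below
(untested, and __pipe_lock_killable is a hypothetical name; every
__pipe_lock caller in fs/pipe.c would have to be converted to handle
the error return):

	/* Sketch: a killable variant of __pipe_lock. */
	static int __pipe_lock_killable(struct pipe_inode_info *pipe)
	{
		/* 0 on success, -EINTR if a fatal signal is pending. */
		return mutex_lock_killable(&pipe->mutex);
	}

	/* e.g. in pipe_read()/pipe_write(): */
	if (__pipe_lock_killable(pipe))
		return -ERESTARTSYS;

With that, kill -9 would always get a stuck task unblocked, and
khungtaskd (which only checks plain D state) would stop reporting it.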

Now if it's impossible to improve the pipe code so it works better, not
just for uffd, there's still no reason to worry: we could disable uffd
in the pipe context. For example, ptrace opts out of uffds, so that gdb
doesn't get stuck if you read a pointer that should be handled by the
process under debug. I hope it won't be necessary, but it wouldn't be a
major issue, and it certainly wouldn't risk breaking qemu
(non-cooperative APIs are privileged, so they could still skip the
timeout).

> Because userfaultfd has become a huge source of security holes as a
> way to time kernel faults or delay them indefinitely.

I assume you refer to the below:

https://duasynt.com/blog/cve-2016-6187-heap-off-by-one-exploit
https://blog.lizzie.io/using-userfaultfd.html

These reports don't happen to mention CONFIG_SLAB_FREELIST_RANDOM=y,
which is already enabled by default in enterprise kernels for a reason,
and they don't mention the more recent CONFIG_SLAB_FREELIST_HARDENED
and CONFIG_SHUFFLE_PAGE_ALLOCATOR.

Can they test it again with those options enabled? Does it still work
as well, or not anymore? That would be very helpful to know.

Randomizing which page gets allocated next is much more important than
worrying about uffd, because even with uffd removed you may still have
other ways to temporarily stall a page fault, depending on the
setup. Example:

https://bugs.chromium.org/p/project-zero/issues/detail?id=808

The above one doesn't use uffd; it uses fuse. So is fuse also a source
of security holes, given that they even preferred it over uffd for the
exploit?

"This can be done by abusing the writev() syscall and FUSE: The
attacker mounts a FUSE filesystem that artificially delays read
accesses, then mmap()s a file containing a struct iovec from that FUSE
filesystem and passes the result of mmap() to writev(). (Another way
to do this would be to use the userfaultfd() syscall.)"

It's not just uffd and fuse: even if you set
/proc/sys/vm/unprivileged_userfaultfd to 0 and you drop fuse, swapin
and pagein from NFS can be attacked through the network. The fact that
there's a timeout won't make those attack vectors any less useful.

Capping the race window at 1 sec or 40 sec won't help much. What would
be useful, if anything, is to add a delay to the wakeup event, not to
add a timeout; but even if we do that, the bug may still be
reproducible and the exploit may eventually win the race regardless, by
just waiting longer.

It's impossible to obtain maximum features, maximum security and
maximum performance all at the same time; something has to give. We
clearly want to support the enhanced-robustness secure setup where all
the other unprivileged_* sysctls are tweaked too (not just uffd), and
we already do, with the sysctl we added for it. In this respect uffd is
no different from other features that also shouldn't be accessible
without privilege in those setups. It's part of the tradeoff.

Most important, the default container runtime seccomp filter blocks
uffd, so all containers are at zero risk unless they use uffd actively
and opt in explicitly using the OCI schema seccomp filter, in which
case the previous sentence applies.

Anybody running a secure setup but not wrapping everything under at
least the default seccomp filter of the container runtime is not really
secure anyway, so even the sysctl is meaningless in reality. Way more
useful than the sysctl in practice would be for the container runtime
to have a hard denylist/allowlist that cannot be opted out of via the
OCI schema; in paranoid setups some syscalls could be added to it, even
though that may break stuff.

Randomizing the allocations so the timing doesn't matter anymore is
most certainly worth it, because it will work not just for uffd, but
also for fuse, for swapin from NFS under a malicious network flood, and
for any other source of controllable page fault stalls.

The last complaint received on the uffd security topic was the
suggestion that uffd being allowed to block kernel faults was a
concern for lockdown confidentiality mode. I answered that here, and
repeated part of it above as well:
https://lkml.kernel.org/r/20200520040608.GB26186@redhat.com

(If you have no time to read the above, the short version is that
lockdown confidentiality is not a robustness/probabilistic feature.
It's a black-and-white feature, and as such it shouldn't even try to be
robust against kernel bugs by design. If we turn confidentiality mode
into a "hardening" kernel feature, then the sky is the limit: it'll
become a growing black hole that drops more and more features, with no
sure line on where to stop dropping, until what's left is too little or
too slow to be useful.)

Thanks,
Andrea

(On the seccomp topic, and this would open a new huge thread: we
absolutely need to change the default of spec_store_bypass_disable and
spectre_v2_user to prctl; we can't keep it at seccomp. Most userland,
including the container runtimes, already opted out with
SECCOMP_FILTER_FLAG_SPEC_ALLOW, and the longer we wait the more
pointless it becomes to even leave a =seccomp option as an opt-in
later. I'll try to work on submitting something soon to fix this.)



* Re: INFO: task hung in pipe_read (2)
  2020-08-07  5:31     ` Andrea Arcangeli
@ 2020-08-08  1:01       ` Tetsuo Handa
  2020-08-10 19:29         ` Andrea Arcangeli
From: Tetsuo Handa @ 2020-08-08  1:01 UTC (permalink / raw)
  To: Andrea Arcangeli, Linus Torvalds, Al Viro
  Cc: syzbot, syzkaller-bugs, linux-fsdevel, Linux Kernel Mailing List,
	Dmitry Vyukov, Andrew Morton

On 2020/08/07 14:31, Andrea Arcangeli wrote:
>> Andrea? Comments? As mentioned, this is probably much too aggressive,
>> but I do think we need to limit the time that the kernel will wait for
>> page faults.
> 
> Why does the pipe code prevent SIGKILL from killing the task that is
> blocked on the mutex_lock? Is there any good reason for it, or does it
> simply have room for improvement regardless of the hangcheck report? It'd
> be great if we could look into that before looking into the uffd-specific bits.

It would be possible to use the _killable version for this specific function, but

> 
> The hangcheck timer would have zero issues with tasks that can be
> killed, if only the pipe code could be improved to use mutex_lock_killable.
> 
> 		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
> 		if (t->state == TASK_UNINTERRUPTIBLE)
> 			check_hung_task(t, timeout);
> 
> The hangcheck report is just telling us that one task was in D state a
> little too long; it wasn't a fatal error, the kernel wasn't actually
> destabilized, and the only malfunction reported is that a task was
> unkillable for too long.

use of killable waits disables the ability to detect possible deadlocks
(lockdep can't check for deadlocks that involve actions in userspace),
because the syzkaller process is SIGKILLed after 5 seconds while
khungtaskd's timeout is 140 seconds.

If we encounter a deadlock in an unattended operation (e.g. some server
process), we don't have a method for resolving the deadlock. Therefore,
I consider the t->state == TASK_UNINTERRUPTIBLE check a bad choice.
Unless a sleep is neutral (e.g. no lock is held, or it is obviously safe
to sleep with that specific lock held), sleeping for 140 seconds inside
the kernel is a bad sign even if the sleep is interruptible/killable.

> 
> Now if it's impossible to improve the pipe code so it works better not
> just for uffd, there's still no reason to worry: we could disable uffd
> in the pipe context. For example ptrace opts-out of uffds, so that gdb
> doesn't get stuck if you read a pointer that should be handled by the
> process that is under debug. I hope it won't be necessary but it
> wouldn't be a major issue, certainly it wouldn't risk breaking qemu
> (and non-cooperative APIs are privileged so it could still skip the
> timeout).

Can we do something like this?

  bool retried = false;
retry:
  lock();
  disable_fault();
  ret = access_memory_that_might_fault();
  enable_fault();
  if (ret == -EWOULDFAULT && !retried)
    goto retry_without_lock;
  if (ret == 0)
    ret = do_something();
  unlock();
  return ret;
retry_without_lock:
  unlock();
  ret = access_memory_that_might_fault();
  retried = true;
  goto retry;
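
As a rough sketch of how that could look with the kernel's existing
fault helpers (pagefault_disable()/pagefault_enable() make the user
access fail fast instead of sleeping in the fault handler, and
fault_in_pages_readable() prefaults with no lock held; the lock and
do_something() are placeholders):

  static long locked_user_copy(struct mutex *lock, void *dst,
                               const char __user *src, size_t len)
  {
          bool retried = false;
          long ret;

  retry:
          mutex_lock(lock);
          pagefault_disable();
          /* Returns the number of bytes that could not be copied. */
          ret = __copy_from_user_inatomic(dst, src, len);
          pagefault_enable();
          if (ret && !retried) {
                  /* Fault needed: drop the lock, fault pages in, retry. */
                  mutex_unlock(lock);
                  if (fault_in_pages_readable(src, len))
                          return -EFAULT; /* genuinely bad user address */
                  retried = true;
                  goto retry;
          }
          ret = ret ? -EFAULT : do_something();
          mutex_unlock(lock);
          return ret;
  }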


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: INFO: task hung in pipe_read (2)
  2020-08-08  1:01       ` Tetsuo Handa
@ 2020-08-10 19:29         ` Andrea Arcangeli
  2020-08-13  7:00           ` Tetsuo Handa
  0 siblings, 1 reply; 11+ messages in thread
From: Andrea Arcangeli @ 2020-08-10 19:29 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Linus Torvalds, Al Viro, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Dmitry Vyukov, Andrew Morton

Hello Tetsuo,

On Sat, Aug 08, 2020 at 10:01:21AM +0900, Tetsuo Handa wrote:
> the use of killable waits disables the ability to detect possible
> deadlocks (lockdep can't check for deadlocks that involve actions in
> userspace), because the syzkaller process is SIGKILLed after 5 seconds
> while khungtaskd's timeout is 140 seconds.
> 
> If we encounter a deadlock in an unattended operation (e.g. some server
> process), we don't have a method for resolving the deadlock. Therefore,
> I consider the t->state == TASK_UNINTERRUPTIBLE check a bad choice.
> Unless a sleep is neutral (e.g. no lock is held, or it is obviously safe
> to sleep with that specific lock held), sleeping for 140 seconds inside
> the kernel is a bad sign even if the sleep is interruptible/killable.

A task in killable state for seconds as a result of another task
taking too long to do something in the kernel sounds bad, even when
the other task has a legitimate reason to take a long time in normal
operation, e.g. when it is just doing a getdents() on a large
directory.

Nobody forces any app to use userfaultfd; if an app uses it, the other
side of the pipe trusts reading from it, and the reader gets stuck for
seconds in an uninterruptible but killable state, that's an app bug,
resolvable with kill -9. We also can't guarantee that all signals will
run in the presence of other bugs, for example if the task that won't
respond to any signal other than CONT and KILL was blocked in stopped
state by a buggy SIGSTOP. The pipe can also get stuck if the network
is down while it's swapping in from NFS, and nobody is forced to take
the risk of using network-attached storage as a swap device either.

The hangcheck is currently correct to report a concern, because the
other side of the pipe may be another process of another user that
cannot SIGKILL the task blocked in the userfault. That sounds
far-fetched and it's not particularly concerning anyway, but it's not
technically impossible, so I agree with the hangcheck timer reporting
an issue that needs correction.

However, once the mutex is killable there's no concern anymore, and
the hangcheck timer is then also correct in no longer reporting any
misbehavior.

Instead of userfaultfd, you can think of 100% kernel faults backed by
swapin from NFS, or swapin from network-attached storage, or swapin
from scsi with a fibre channel cable accidentally pulled out for a few
seconds. It's nice if uffd can survive that as well as nfs or scsi
would, by retrying and waiting more than 1 sec.

> Can we do something like this?
> 
>   bool retried = false;
> retry:
>   lock();
>   disable_fault();
>   ret = access_memory_that_might_fault();
>   enable_fault();
>   if (ret == -EWOULDFAULT && !retried)
>     goto retry_without_lock;
>   if (ret == 0)
>     ret = do_something();
>   unlock();
>   return ret;
> retry_without_lock:
>   unlock();
>   ret = access_memory_that_might_fault();
>   retried = true;
>   goto retry;

This would work, but it'd make the kernel more complex than using a
killable mutex.
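
For comparison, the killable variant is just a bailout at each lock
site; mutex_lock_killable() fails only when a fatal signal is pending,
e.g.:

	if (mutex_lock_killable(&pipe->mutex))
		return -ERESTARTSYS;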

It'd also give a worse runtime than the killable mutex if the page
fault weren't the only source of blocking events while the mutex is
held.

With just two processes, as in this case, it would probably be fine
and there likely won't be other sources of contention, so the main
cons are the code complexity to be maintained and the fact that it
won't provide any measurable practical benefit; if anything, it'll run
slower by having to repeat the same fault in blocking and non-blocking
mode.

With regard to the reporting of the hangcheck timer, most modern
paging code uses killable mutexes because, unlike the pipe code, there
can be other sources of blockage and you don't want to wait on shared
resources to unblock a process that is waiting on a mutex. I think
trying to reduce the usage of killable mutexes overall is a ship that
has sailed; avoiding them just in the pipe code won't move the needle,
since they'll remain everywhere else.

So I'm certainly not against your proposal, but if we increase the
complexity like the above, then I'd find it more attractive if it were
for some other benefit unrelated to userfaultfd (or to swapin from NFS
or network-attached storage, for that matter), and I don't see a big
enough benefit to justify it.

Thanks!
Andrea

PS. I'll be busy until Wed, sorry if I don't answer followups
    promptly. If somebody could give a try at adding the
    killable-mutex bailout failure paths that return directly to
    userland, or at your more complex alternative, it'd be great.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: INFO: task hung in pipe_read (2)
  2020-08-10 19:29         ` Andrea Arcangeli
@ 2020-08-13  7:00           ` Tetsuo Handa
  2020-08-13 11:20             ` Tetsuo Handa
  0 siblings, 1 reply; 11+ messages in thread
From: Tetsuo Handa @ 2020-08-13  7:00 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Linus Torvalds, Al Viro, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Dmitry Vyukov, Andrew Morton

On 2020/08/11 4:29, Andrea Arcangeli wrote:
> However, once the mutex is killable there's no concern anymore, and
> the hangcheck timer is then also correct in no longer reporting any
> misbehavior.

Do you mean something like the untested patch below? I think the
difficult part is that the mutex for the close() operation can't become
killable. And I worry that syzbot will soon report a hung task at
pipe_release() instead of pipe_read() or pipe_write(). If page faults
with locks held can be avoided, there will be no such worry.

 fs/pipe.c                 | 106 ++++++++++++++++++++++++++++++++++++++--------
 fs/splice.c               |  41 ++++++++++++------
 include/linux/pipe_fs_i.h |   5 ++-
 3 files changed, 120 insertions(+), 32 deletions(-)

diff --git a/fs/pipe.c b/fs/pipe.c
index 60dbee4..537d1ef 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -66,6 +66,13 @@ static void pipe_lock_nested(struct pipe_inode_info *pipe, int subclass)
 		mutex_lock_nested(&pipe->mutex, subclass);
 }
 
+static int __must_check pipe_lock_killable_nested(struct pipe_inode_info *pipe, int subclass)
+{
+	if (pipe->files)
+		return mutex_lock_killable_nested(&pipe->mutex, subclass);
+	return 0;
+}
+
 void pipe_lock(struct pipe_inode_info *pipe)
 {
 	/*
@@ -75,6 +82,14 @@ void pipe_lock(struct pipe_inode_info *pipe)
 }
 EXPORT_SYMBOL(pipe_lock);
 
+int pipe_lock_killable(struct pipe_inode_info *pipe)
+{
+	/*
+	 * pipe_lock() nests non-pipe inode locks (for writing to a file)
+	 */
+	return pipe_lock_killable_nested(pipe, I_MUTEX_PARENT);
+}
+
 void pipe_unlock(struct pipe_inode_info *pipe)
 {
 	if (pipe->files)
@@ -87,23 +102,37 @@ static inline void __pipe_lock(struct pipe_inode_info *pipe)
 	mutex_lock_nested(&pipe->mutex, I_MUTEX_PARENT);
 }
 
+static inline int __must_check __pipe_lock_killable(struct pipe_inode_info *pipe)
+{
+	return mutex_lock_killable_nested(&pipe->mutex, I_MUTEX_PARENT);
+}
+
 static inline void __pipe_unlock(struct pipe_inode_info *pipe)
 {
 	mutex_unlock(&pipe->mutex);
 }
 
-void pipe_double_lock(struct pipe_inode_info *pipe1,
-		      struct pipe_inode_info *pipe2)
+int pipe_double_lock_killable(struct pipe_inode_info *pipe1,
+			      struct pipe_inode_info *pipe2)
 {
 	BUG_ON(pipe1 == pipe2);
 
 	if (pipe1 < pipe2) {
-		pipe_lock_nested(pipe1, I_MUTEX_PARENT);
-		pipe_lock_nested(pipe2, I_MUTEX_CHILD);
+		if (pipe_lock_killable_nested(pipe1, I_MUTEX_PARENT))
+			return -ERESTARTSYS;
+		if (pipe_lock_killable_nested(pipe2, I_MUTEX_CHILD)) {
+			pipe_unlock(pipe1);
+			return -ERESTARTSYS;
+		}
 	} else {
-		pipe_lock_nested(pipe2, I_MUTEX_PARENT);
-		pipe_lock_nested(pipe1, I_MUTEX_CHILD);
+		if (pipe_lock_killable_nested(pipe2, I_MUTEX_PARENT))
+			return -ERESTARTSYS;
+		if (pipe_lock_killable_nested(pipe1, I_MUTEX_CHILD)) {
+			pipe_unlock(pipe2);
+			return -ERESTARTSYS;
+		}
 	}
+	return 0;
 }
 
 /* Drop the inode semaphore and wait for a pipe event, atomically */
@@ -125,6 +154,24 @@ void pipe_wait(struct pipe_inode_info *pipe)
 	pipe_lock(pipe);
 }
 
+int pipe_wait_killable(struct pipe_inode_info *pipe)
+{
+	DEFINE_WAIT(rdwait);
+	DEFINE_WAIT(wrwait);
+
+	/*
+	 * Pipes are system-local resources, so sleeping on them
+	 * is considered a noninteractive wait:
+	 */
+	prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
+	prepare_to_wait(&pipe->wr_wait, &wrwait, TASK_INTERRUPTIBLE);
+	pipe_unlock(pipe);
+	schedule();
+	finish_wait(&pipe->rd_wait, &rdwait);
+	finish_wait(&pipe->wr_wait, &wrwait);
+	return pipe_lock_killable(pipe);
+}
+
 static void anon_pipe_buf_release(struct pipe_inode_info *pipe,
 				  struct pipe_buffer *buf)
 {
@@ -244,7 +291,8 @@ static inline bool pipe_readable(const struct pipe_inode_info *pipe)
 		return 0;
 
 	ret = 0;
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	/*
 	 * We only wake up writers if the pipe was full when we started
@@ -381,7 +429,8 @@ static inline bool pipe_readable(const struct pipe_inode_info *pipe)
 		if (wait_event_interruptible_exclusive(pipe->rd_wait, pipe_readable(pipe)) < 0)
 			return -ERESTARTSYS;
 
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
 		wake_next_reader = true;
 	}
@@ -432,7 +481,8 @@ static inline bool pipe_writable(const struct pipe_inode_info *pipe)
 	if (unlikely(total_len == 0))
 		return 0;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	if (!pipe->readers) {
 		send_sig(SIGPIPE, current, 0);
@@ -577,7 +627,14 @@ static inline bool pipe_writable(const struct pipe_inode_info *pipe)
 			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
 		}
 		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe)) {
+			if (!ret)
+				ret = -ERESTARTSYS;
+			/* Extra notification is better than missing notification? */
+			was_empty = true;
+			wake_next_writer = true;
+			goto out_unlocked;
+		}
 		was_empty = pipe_empty(pipe->head, pipe->tail);
 		wake_next_writer = true;
 	}
@@ -586,6 +643,7 @@ static inline bool pipe_writable(const struct pipe_inode_info *pipe)
 		wake_next_writer = false;
 	__pipe_unlock(pipe);
 
+out_unlocked:
 	/*
 	 * If we do do a wakeup event, we do a 'sync' wakeup, because we
 	 * want the reader to start processing things asap, rather than
@@ -617,7 +675,8 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 
 	switch (cmd) {
 	case FIONREAD:
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		count = 0;
 		head = pipe->head;
 		tail = pipe->tail;
@@ -634,7 +693,8 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 #ifdef CONFIG_WATCH_QUEUE
 	case IOC_WATCH_QUEUE_SET_SIZE: {
 		int ret;
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		ret = watch_queue_set_size(pipe, arg);
 		__pipe_unlock(pipe);
 		return ret;
@@ -719,6 +779,10 @@ static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
 {
 	struct pipe_inode_info *pipe = file->private_data;
 
+	/*
+	 * This lock can't be killable. How to avoid deadlock if page fault
+	 * with pipe mutex held does not finish quickly?
+	 */
 	__pipe_lock(pipe);
 	if (file->f_mode & FMODE_READ)
 		pipe->readers--;
@@ -744,7 +808,8 @@ static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
 	struct pipe_inode_info *pipe = filp->private_data;
 	int retval = 0;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe)) /* Can this be safely killable? */
+		return -ERESTARTSYS;
 	if (filp->f_mode & FMODE_READ)
 		retval = fasync_helper(fd, filp, on, &pipe->fasync_readers);
 	if ((filp->f_mode & FMODE_WRITE) && retval >= 0) {
@@ -1040,7 +1105,8 @@ static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt)
 	int cur = *cnt;
 
 	while (cur == *cnt) {
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			break;
 		if (signal_pending(current))
 			break;
 	}
@@ -1083,10 +1149,13 @@ static int fifo_open(struct inode *inode, struct file *filp)
 			spin_unlock(&inode->i_lock);
 		}
 	}
-	filp->private_data = pipe;
-	/* OK, we have a pipe and it's pinned down */
 
-	__pipe_lock(pipe);
+	/* OK, we have a pipe and it's pinned down */
+	if (__pipe_lock_killable(pipe)) {
+		put_pipe_info(inode, pipe);
+		return -ERESTARTSYS;
+	}
+	filp->private_data = pipe;
 
 	/* We can only do regular read/write on fifos */
 	stream_open(inode, filp);
@@ -1349,7 +1418,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!pipe)
 		return -EBADF;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	switch (cmd) {
 	case F_SETPIPE_SZ:
diff --git a/fs/splice.c b/fs/splice.c
index d7c8a7c..65d24df 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -563,7 +563,8 @@ static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_des
 			sd->need_wakeup = false;
 		}
 
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 
 	return 1;
@@ -657,7 +658,8 @@ ssize_t splice_from_pipe(struct pipe_inode_info *pipe, struct file *out,
 		.u.file = out,
 	};
 
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 	ret = __splice_from_pipe(pipe, &sd, actor);
 	pipe_unlock(pipe);
 
@@ -696,7 +698,10 @@ ssize_t splice_from_pipe(struct pipe_inode_info *pipe, struct file *out,
 	if (unlikely(!array))
 		return -ENOMEM;
 
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe)) {
+		kfree(array);
+		return -ERESTARTSYS;
+	}
 
 	splice_from_pipe_begin(&sd);
 	while (sd.total_len) {
@@ -1077,7 +1082,8 @@ static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags)
 			return -EAGAIN;
 		if (signal_pending(current))
 			return -ERESTARTSYS;
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 }
 
@@ -1167,7 +1173,8 @@ long do_splice(struct file *in, loff_t __user *off_in,
 		if (out->f_flags & O_NONBLOCK)
 			flags |= SPLICE_F_NONBLOCK;
 
-		pipe_lock(opipe);
+		if (pipe_lock_killable(opipe))
+			return -ERESTARTSYS;
 		ret = wait_for_space(opipe, flags);
 		if (!ret) {
 			unsigned int p_space;
@@ -1264,7 +1271,8 @@ static long vmsplice_to_user(struct file *file, struct iov_iter *iter,
 		return -EBADF;
 
 	if (sd.total_len) {
-		pipe_lock(pipe);
+		if (pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		ret = __splice_from_pipe(pipe, &sd, pipe_to_user);
 		pipe_unlock(pipe);
 	}
@@ -1291,7 +1299,8 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
 	if (!pipe)
 		return -EBADF;
 
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 	ret = wait_for_space(pipe, flags);
 	if (!ret)
 		ret = iter_to_pipe(iter, pipe, buf_flag);
@@ -1441,7 +1450,8 @@ static int ipipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 		return 0;
 
 	ret = 0;
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	while (pipe_empty(pipe->head, pipe->tail)) {
 		if (signal_pending(current)) {
@@ -1454,7 +1464,8 @@ static int ipipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 			ret = -EAGAIN;
 			break;
 		}
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 
 	pipe_unlock(pipe);
@@ -1477,7 +1488,8 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 		return 0;
 
 	ret = 0;
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	while (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
 		if (!pipe->readers) {
@@ -1493,7 +1505,8 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 			ret = -ERESTARTSYS;
 			break;
 		}
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 
 	pipe_unlock(pipe);
@@ -1529,7 +1542,8 @@ static int splice_pipe_to_pipe(struct pipe_inode_info *ipipe,
 	 * grabbing by pipe info address. Otherwise two different processes
 	 * could deadlock (one doing tee from A -> B, the other from B -> A).
 	 */
-	pipe_double_lock(ipipe, opipe);
+	if (pipe_double_lock_killable(ipipe, opipe))
+		return -ERESTARTSYS;
 
 	i_tail = ipipe->tail;
 	i_mask = ipipe->ring_size - 1;
@@ -1655,7 +1669,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
 	 * grabbing by pipe info address. Otherwise two different processes
 	 * could deadlock (one doing tee from A -> B, the other from B -> A).
 	 */
-	pipe_double_lock(ipipe, opipe);
+	if (pipe_double_lock_killable(ipipe, opipe))
+		return -ERESTARTSYS;
 
 	i_tail = ipipe->tail;
 	i_mask = ipipe->ring_size - 1;
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index 50afd0d..eb99c18 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -233,8 +233,10 @@ static inline bool pipe_buf_try_steal(struct pipe_inode_info *pipe,
 
 /* Pipe lock and unlock operations */
 void pipe_lock(struct pipe_inode_info *);
+int __must_check pipe_lock_killable(struct pipe_inode_info *pipe);
 void pipe_unlock(struct pipe_inode_info *);
-void pipe_double_lock(struct pipe_inode_info *, struct pipe_inode_info *);
+int __must_check pipe_double_lock_killable(struct pipe_inode_info *pipe1,
+					   struct pipe_inode_info *pipe2);
 
 extern unsigned int pipe_max_size;
 extern unsigned long pipe_user_pages_hard;
@@ -242,6 +244,7 @@ static inline bool pipe_buf_try_steal(struct pipe_inode_info *pipe,
 
 /* Drop the inode semaphore and wait for a pipe event, atomically */
 void pipe_wait(struct pipe_inode_info *pipe);
+int __must_check pipe_wait_killable(struct pipe_inode_info *pipe);
 
 struct pipe_inode_info *alloc_pipe_info(void);
 void free_pipe_info(struct pipe_inode_info *);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: INFO: task hung in pipe_read (2)
  2020-08-13  7:00           ` Tetsuo Handa
@ 2020-08-13 11:20             ` Tetsuo Handa
  2020-08-22  4:34               ` [RFC PATCH] pipe: make pipe_release() deferrable Tetsuo Handa
  0 siblings, 1 reply; 11+ messages in thread
From: Tetsuo Handa @ 2020-08-13 11:20 UTC (permalink / raw)
  To: Andrea Arcangeli, Al Viro
  Cc: Linus Torvalds, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Dmitry Vyukov, Andrew Morton

On 2020/08/13 16:00, Tetsuo Handa wrote:
> On 2020/08/11 4:29, Andrea Arcangeli wrote:
>> However, once the mutex is killable there's no concern anymore, and
>> the hangcheck timer is then also correct in no longer reporting any
>> misbehavior.
> 
> Do you mean something like the untested patch below? I think the
> difficult part is that the mutex for the close() operation can't become
> killable. And I worry that syzbot will soon report a hung task at
> pipe_release() instead of pipe_read() or pipe_write(). If page faults
> with locks held can be avoided, there will be no such worry.

Hmm, the difficult part is not limited to the close() operation. While
some of the conversions are low-hanging fruit, the rest seem subtle or
complicated. Al, do you think we can make all the pipe mutexes killable?

 fs/pipe.c                 | 104 +++++++++++++++++++++++++++++++-------
 fs/splice.c               |  60 +++++++++++++++-------
 include/linux/pipe_fs_i.h |   5 +-
 3 files changed, 134 insertions(+), 35 deletions(-)

diff --git a/fs/pipe.c b/fs/pipe.c
index 60dbee457143..f21c420dc7c7 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -66,6 +66,13 @@ static void pipe_lock_nested(struct pipe_inode_info *pipe, int subclass)
 		mutex_lock_nested(&pipe->mutex, subclass);
 }
 
+static int __must_check pipe_lock_killable_nested(struct pipe_inode_info *pipe, int subclass)
+{
+	if (pipe->files)
+		return mutex_lock_killable_nested(&pipe->mutex, subclass);
+	return 0;
+}
+
 void pipe_lock(struct pipe_inode_info *pipe)
 {
 	/*
@@ -75,6 +82,14 @@ void pipe_lock(struct pipe_inode_info *pipe)
 }
 EXPORT_SYMBOL(pipe_lock);
 
+int pipe_lock_killable(struct pipe_inode_info *pipe)
+{
+	/*
+	 * pipe_lock() nests non-pipe inode locks (for writing to a file)
+	 */
+	return pipe_lock_killable_nested(pipe, I_MUTEX_PARENT);
+}
+
 void pipe_unlock(struct pipe_inode_info *pipe)
 {
 	if (pipe->files)
@@ -87,23 +102,37 @@ static inline void __pipe_lock(struct pipe_inode_info *pipe)
 	mutex_lock_nested(&pipe->mutex, I_MUTEX_PARENT);
 }
 
+static inline int __must_check __pipe_lock_killable(struct pipe_inode_info *pipe)
+{
+	return mutex_lock_killable_nested(&pipe->mutex, I_MUTEX_PARENT);
+}
+
 static inline void __pipe_unlock(struct pipe_inode_info *pipe)
 {
 	mutex_unlock(&pipe->mutex);
 }
 
-void pipe_double_lock(struct pipe_inode_info *pipe1,
-		      struct pipe_inode_info *pipe2)
+int pipe_double_lock_killable(struct pipe_inode_info *pipe1,
+			      struct pipe_inode_info *pipe2)
 {
 	BUG_ON(pipe1 == pipe2);
 
 	if (pipe1 < pipe2) {
-		pipe_lock_nested(pipe1, I_MUTEX_PARENT);
-		pipe_lock_nested(pipe2, I_MUTEX_CHILD);
+		if (pipe_lock_killable_nested(pipe1, I_MUTEX_PARENT))
+			return -ERESTARTSYS;
+		if (pipe_lock_killable_nested(pipe2, I_MUTEX_CHILD)) {
+			pipe_unlock(pipe1);
+			return -ERESTARTSYS;
+		}
 	} else {
-		pipe_lock_nested(pipe2, I_MUTEX_PARENT);
-		pipe_lock_nested(pipe1, I_MUTEX_CHILD);
+		if (pipe_lock_killable_nested(pipe2, I_MUTEX_PARENT))
+			return -ERESTARTSYS;
+		if (pipe_lock_killable_nested(pipe1, I_MUTEX_CHILD)) {
+			pipe_unlock(pipe2);
+			return -ERESTARTSYS;
+		}
 	}
+	return 0;
 }
 
 /* Drop the inode semaphore and wait for a pipe event, atomically */
@@ -125,6 +154,24 @@ void pipe_wait(struct pipe_inode_info *pipe)
 	pipe_lock(pipe);
 }
 
+int pipe_wait_killable(struct pipe_inode_info *pipe)
+{
+	DEFINE_WAIT(rdwait);
+	DEFINE_WAIT(wrwait);
+
+	/*
+	 * Pipes are system-local resources, so sleeping on them
+	 * is considered a noninteractive wait:
+	 */
+	prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
+	prepare_to_wait(&pipe->wr_wait, &wrwait, TASK_INTERRUPTIBLE);
+	pipe_unlock(pipe);
+	schedule();
+	finish_wait(&pipe->rd_wait, &rdwait);
+	finish_wait(&pipe->wr_wait, &wrwait);
+	return pipe_lock_killable(pipe);
+}
+
 static void anon_pipe_buf_release(struct pipe_inode_info *pipe,
 				  struct pipe_buffer *buf)
 {
@@ -244,7 +291,8 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
 		return 0;
 
 	ret = 0;
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	/*
 	 * We only wake up writers if the pipe was full when we started
@@ -381,7 +429,8 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
 		if (wait_event_interruptible_exclusive(pipe->rd_wait, pipe_readable(pipe)) < 0)
 			return -ERESTARTSYS;
 
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
 		wake_next_reader = true;
 	}
@@ -432,7 +481,8 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 	if (unlikely(total_len == 0))
 		return 0;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	if (!pipe->readers) {
 		send_sig(SIGPIPE, current, 0);
@@ -577,7 +627,14 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
 		}
 		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe)) {
+			if (!ret)
+				ret = -ERESTARTSYS;
+			/* Extra notification is better than missing notification? */
+			was_empty = true;
+			wake_next_writer = true;
+			goto out_unlocked;
+		}
 		was_empty = pipe_empty(pipe->head, pipe->tail);
 		wake_next_writer = true;
 	}
@@ -586,6 +643,7 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 		wake_next_writer = false;
 	__pipe_unlock(pipe);
 
+out_unlocked:
 	/*
 	 * If we do do a wakeup event, we do a 'sync' wakeup, because we
 	 * want the reader to start processing things asap, rather than
@@ -617,7 +675,8 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 
 	switch (cmd) {
 	case FIONREAD:
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		count = 0;
 		head = pipe->head;
 		tail = pipe->tail;
@@ -634,7 +693,8 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 #ifdef CONFIG_WATCH_QUEUE
 	case IOC_WATCH_QUEUE_SET_SIZE: {
 		int ret;
-		__pipe_lock(pipe);
+		if (__pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		ret = watch_queue_set_size(pipe, arg);
 		__pipe_unlock(pipe);
 		return ret;
@@ -719,6 +779,10 @@ pipe_release(struct inode *inode, struct file *file)
 {
 	struct pipe_inode_info *pipe = file->private_data;
 
+	/*
+	 * This lock can't be killable. How to avoid deadlock if page fault
+	 * with pipe mutex held does not finish quickly?
+	 */
 	__pipe_lock(pipe);
 	if (file->f_mode & FMODE_READ)
 		pipe->readers--;
@@ -744,7 +808,8 @@ pipe_fasync(int fd, struct file *filp, int on)
 	struct pipe_inode_info *pipe = filp->private_data;
 	int retval = 0;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe)) /* Can this be safely killable? */
+		return -ERESTARTSYS;
 	if (filp->f_mode & FMODE_READ)
 		retval = fasync_helper(fd, filp, on, &pipe->fasync_readers);
 	if ((filp->f_mode & FMODE_WRITE) && retval >= 0) {
@@ -1040,6 +1105,7 @@ static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt)
 	int cur = *cnt;
 
 	while (cur == *cnt) {
+		/* Can't become killable: we need to re-take the lock before returning. */
 		pipe_wait(pipe);
 		if (signal_pending(current))
 			break;
@@ -1083,10 +1149,13 @@ static int fifo_open(struct inode *inode, struct file *filp)
 			spin_unlock(&inode->i_lock);
 		}
 	}
-	filp->private_data = pipe;
-	/* OK, we have a pipe and it's pinned down */
 
-	__pipe_lock(pipe);
+	/* OK, we have a pipe and it's pinned down */
+	if (__pipe_lock_killable(pipe)) {
+		put_pipe_info(inode, pipe);
+		return -ERESTARTSYS;
+	}
+	filp->private_data = pipe;
 
 	/* We can only do regular read/write on fifos */
 	stream_open(inode, filp);
@@ -1349,7 +1418,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!pipe)
 		return -EBADF;
 
-	__pipe_lock(pipe);
+	if (__pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	switch (cmd) {
 	case F_SETPIPE_SZ:
diff --git a/fs/splice.c b/fs/splice.c
index d7c8a7c4db07..30069937b063 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -563,6 +563,10 @@ static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_des
 			sd->need_wakeup = false;
 		}
 
+		/*
+		 * Can't become killable for now: the call dependencies
+		 * are complicated.
+		 */
 		pipe_wait(pipe);
 	}
 
@@ -657,7 +661,8 @@ ssize_t splice_from_pipe(struct pipe_inode_info *pipe, struct file *out,
 		.u.file = out,
 	};
 
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 	ret = __splice_from_pipe(pipe, &sd, actor);
 	pipe_unlock(pipe);
 
@@ -696,7 +701,10 @@ iter_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
 	if (unlikely(!array))
 		return -ENOMEM;
 
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe)) {
+		kfree(array);
+		return -ERESTARTSYS;
+	}
 
 	splice_from_pipe_begin(&sd);
 	while (sd.total_len) {
@@ -1064,8 +1072,9 @@ long do_splice_direct(struct file *in, loff_t *ppos, struct file *out,
 }
 EXPORT_SYMBOL(do_splice_direct);
 
-static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags)
+static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags, bool *locked)
 {
+	*locked = true;
 	for (;;) {
 		if (unlikely(!pipe->readers)) {
 			send_sig(SIGPIPE, current, 0);
@@ -1077,7 +1086,10 @@ static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags)
 			return -EAGAIN;
 		if (signal_pending(current))
 			return -ERESTARTSYS;
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe)) {
+			*locked = false;
+			return -ERESTARTSYS;
+		}
 	}
 }
 
@@ -1153,6 +1165,8 @@ long do_splice(struct file *in, loff_t __user *off_in,
 	}
 
 	if (opipe) {
+		bool locked;
+
 		if (off_out)
 			return -ESPIPE;
 		if (off_in) {
@@ -1167,8 +1181,9 @@ long do_splice(struct file *in, loff_t __user *off_in,
 		if (out->f_flags & O_NONBLOCK)
 			flags |= SPLICE_F_NONBLOCK;
 
-		pipe_lock(opipe);
-		ret = wait_for_space(opipe, flags);
+		if (pipe_lock_killable(opipe))
+			return -ERESTARTSYS;
+		ret = wait_for_space(opipe, flags, &locked);
 		if (!ret) {
 			unsigned int p_space;
 
@@ -1178,7 +1193,8 @@ long do_splice(struct file *in, loff_t __user *off_in,
 
 			ret = do_splice_to(in, &offset, opipe, len, flags);
 		}
-		pipe_unlock(opipe);
+		if (locked)
+			pipe_unlock(opipe);
 		if (ret > 0)
 			wakeup_pipe_readers(opipe);
 		if (!off_in)
@@ -1264,7 +1280,8 @@ static long vmsplice_to_user(struct file *file, struct iov_iter *iter,
 		return -EBADF;
 
 	if (sd.total_len) {
-		pipe_lock(pipe);
+		if (pipe_lock_killable(pipe))
+			return -ERESTARTSYS;
 		ret = __splice_from_pipe(pipe, &sd, pipe_to_user);
 		pipe_unlock(pipe);
 	}
@@ -1283,6 +1300,7 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
 	struct pipe_inode_info *pipe;
 	long ret = 0;
 	unsigned buf_flag = 0;
+	bool locked;
 
 	if (flags & SPLICE_F_GIFT)
 		buf_flag = PIPE_BUF_FLAG_GIFT;
@@ -1291,11 +1309,13 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
 	if (!pipe)
 		return -EBADF;
 
-	pipe_lock(pipe);
-	ret = wait_for_space(pipe, flags);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
+	ret = wait_for_space(pipe, flags, &locked);
 	if (!ret)
 		ret = iter_to_pipe(iter, pipe, buf_flag);
-	pipe_unlock(pipe);
+	if (locked)
+		pipe_unlock(pipe);
 	if (ret > 0)
 		wakeup_pipe_readers(pipe);
 	return ret;
@@ -1441,7 +1461,8 @@ static int ipipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 		return 0;
 
 	ret = 0;
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	while (pipe_empty(pipe->head, pipe->tail)) {
 		if (signal_pending(current)) {
@@ -1454,7 +1475,8 @@ static int ipipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 			ret = -EAGAIN;
 			break;
 		}
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 
 	pipe_unlock(pipe);
@@ -1477,7 +1499,8 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 		return 0;
 
 	ret = 0;
-	pipe_lock(pipe);
+	if (pipe_lock_killable(pipe))
+		return -ERESTARTSYS;
 
 	while (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
 		if (!pipe->readers) {
@@ -1493,7 +1516,8 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
 			ret = -ERESTARTSYS;
 			break;
 		}
-		pipe_wait(pipe);
+		if (pipe_wait_killable(pipe))
+			return -ERESTARTSYS;
 	}
 
 	pipe_unlock(pipe);
@@ -1529,7 +1553,8 @@ static int splice_pipe_to_pipe(struct pipe_inode_info *ipipe,
 	 * grabbing by pipe info address. Otherwise two different processes
 	 * could deadlock (one doing tee from A -> B, the other from B -> A).
 	 */
-	pipe_double_lock(ipipe, opipe);
+	if (pipe_double_lock_killable(ipipe, opipe))
+		return -ERESTARTSYS;
 
 	i_tail = ipipe->tail;
 	i_mask = ipipe->ring_size - 1;
@@ -1655,7 +1680,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
 	 * grabbing by pipe info address. Otherwise two different processes
 	 * could deadlock (one doing tee from A -> B, the other from B -> A).
 	 */
-	pipe_double_lock(ipipe, opipe);
+	if (pipe_double_lock_killable(ipipe, opipe))
+		return -ERESTARTSYS;
 
 	i_tail = ipipe->tail;
 	i_mask = ipipe->ring_size - 1;
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index 50afd0d0084c..eb99c18fc12d 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -233,8 +233,10 @@ static inline bool pipe_buf_try_steal(struct pipe_inode_info *pipe,
 
 /* Pipe lock and unlock operations */
 void pipe_lock(struct pipe_inode_info *);
+int __must_check pipe_lock_killable(struct pipe_inode_info *pipe);
 void pipe_unlock(struct pipe_inode_info *);
-void pipe_double_lock(struct pipe_inode_info *, struct pipe_inode_info *);
+int __must_check pipe_double_lock_killable(struct pipe_inode_info *pipe1,
+					   struct pipe_inode_info *pipe2);
 
 extern unsigned int pipe_max_size;
 extern unsigned long pipe_user_pages_hard;
@@ -242,6 +244,7 @@ extern unsigned long pipe_user_pages_soft;
 
 /* Drop the inode semaphore and wait for a pipe event, atomically */
 void pipe_wait(struct pipe_inode_info *pipe);
+int __must_check pipe_wait_killable(struct pipe_inode_info *pipe);
 
 struct pipe_inode_info *alloc_pipe_info(void);
 void free_pipe_info(struct pipe_inode_info *);
-- 
2.18.4


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC PATCH] pipe: make pipe_release() deferrable.
  2020-08-13 11:20             ` Tetsuo Handa
@ 2020-08-22  4:34               ` Tetsuo Handa
  2020-08-22 16:30                 ` Linus Torvalds
  0 siblings, 1 reply; 11+ messages in thread
From: Tetsuo Handa @ 2020-08-22  4:34 UTC (permalink / raw)
  To: Al Viro
  Cc: Andrea Arcangeli, Linus Torvalds, syzbot, syzkaller-bugs,
	linux-fsdevel, Linux Kernel Mailing List, Dmitry Vyukov,
	Andrew Morton

syzbot is reporting a hung task at pipe_write() [1]: __pipe_lock() from
pipe_write() by task-A can be blocked forever, waiting for
handle_userfault() reached from copy_page_from_iter() from pipe_write()
by task-B to complete and call __pipe_unlock().

Since the problem is that we can't force task-B to complete
handle_userfault() immediately (this is effectively returning to
userspace with locks held), we won't be able to avoid this hung task
report unless we convert all pipe locks to killable (because khungtaskd
does not warn about stalled killable waits).

Linus Torvalds commented that we could introduce a timeout for
handle_userfault(), and Andrea Arcangeli responded that a too-short
timeout can cause problems (e.g. breaking qemu's live migration) [2],
and we can't guarantee that khungtaskd's timeout is longer than the
combined timeout of multiple handle_userfault() events.

Since Andrea commented that we will be able to avoid this hung task
report if we convert the pipe locks to killable, I tried a feasibility
test [3]. While it is not difficult to make some of the pipe locks
killable, there are subtle or complicated locations (e.g. pipe_wait()
users).

syzbot already reported that even pipe_release() is subject to this hung
task report [4]. While the cause of [4] is that splice() from a pipe to
a file hit an infinite-busy-loop bug after taking the pipe lock, it is a
sign that we have to care about __pipe_lock() in pipe_release() even if
pipe_read() or pipe_write() is stalling due to page fault handling.

Therefore, this patch tries to convert __pipe_lock() in pipe_release()
to killable, by deferring to a workqueue context when
__pipe_lock_killable() fails.

(a) Do you think that we can make all pipe locks killable?
(b) Do you think that we can introduce a timeout for handling page faults?
(c) Do you think that we can avoid page faults with pipe locks held?

[1] https://syzkaller.appspot.com/bug?id=ab3d277fa3b068651edb7171a1aa4f78e5eacf78
[2] https://lkml.kernel.org/r/CAHk-=wj15SDiHjP2wPiC=Ru-RrUjOuT4AoULj6N_9pVvSXaWiw@mail.gmail.com
[3] https://lkml.kernel.org/r/dc9b2681-3b84-eb74-8c88-3815beaff7f8@i-love.sakura.ne.jp
[4] https://syzkaller.appspot.com/bug?id=2ccac875e85dc852911a0b5b788ada82dc0a081e

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 fs/pipe.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 48 insertions(+), 7 deletions(-)

diff --git a/fs/pipe.c b/fs/pipe.c
index 60dbee457143..a64c7fc1794f 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -87,6 +87,11 @@ static inline void __pipe_lock(struct pipe_inode_info *pipe)
 	mutex_lock_nested(&pipe->mutex, I_MUTEX_PARENT);
 }
 
+static inline int __pipe_lock_killable(struct pipe_inode_info *pipe)
+{
+	return mutex_lock_killable_nested(&pipe->mutex, I_MUTEX_PARENT);
+}
+
 static inline void __pipe_unlock(struct pipe_inode_info *pipe)
 {
 	mutex_unlock(&pipe->mutex);
@@ -714,15 +719,12 @@ static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
 		free_pipe_info(pipe);
 }
 
-static int
-pipe_release(struct inode *inode, struct file *file)
+/* Caller holds pipe->mutex. */
+static void do_pipe_release(struct inode *inode, struct pipe_inode_info *pipe, fmode_t f_mode)
 {
-	struct pipe_inode_info *pipe = file->private_data;
-
-	__pipe_lock(pipe);
-	if (file->f_mode & FMODE_READ)
+	if (f_mode & FMODE_READ)
 		pipe->readers--;
-	if (file->f_mode & FMODE_WRITE)
+	if (f_mode & FMODE_WRITE)
 		pipe->writers--;
 
 	/* Was that the last reader or writer, but not the other side? */
@@ -732,9 +734,48 @@ pipe_release(struct inode *inode, struct file *file)
 		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
 		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
 	}
+}
+
+struct pipe_release_data {
+	struct work_struct work;
+	struct inode *inode;
+	struct pipe_inode_info *pipe;
+	fmode_t f_mode;
+};
+
+static void deferred_pipe_release(struct work_struct *work)
+{
+	struct pipe_release_data *w = container_of(work, struct pipe_release_data, work);
+	struct inode *inode = w->inode;
+	struct pipe_inode_info *pipe = w->pipe;
+
+	__pipe_lock(pipe);
+	do_pipe_release(inode, pipe, w->f_mode);
 	__pipe_unlock(pipe);
 
 	put_pipe_info(inode, pipe);
+	iput(inode); /* pipe_release() called ihold(inode). */
+	kfree(w);
+}
+
+static int pipe_release(struct inode *inode, struct file *file)
+{
+	struct pipe_inode_info *pipe = file->private_data;
+	struct pipe_release_data *w;
+
+	if (likely(__pipe_lock_killable(pipe) == 0)) {
+		do_pipe_release(inode, pipe, file->f_mode);
+		__pipe_unlock(pipe);
+		put_pipe_info(inode, pipe);
+		return 0;
+	}
+	w = kmalloc(sizeof(*w), GFP_KERNEL | __GFP_NOFAIL);
+	ihold(inode); /* deferred_pipe_release() will call iput(inode). */
+	w->inode = inode;
+	w->pipe = pipe;
+	w->f_mode = file->f_mode;
+	INIT_WORK(&w->work, deferred_pipe_release);
+	queue_work(system_wq, &w->work);
 	return 0;
 }
 
-- 
2.18.4



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [RFC PATCH] pipe: make pipe_release() deferrable.
  2020-08-22  4:34               ` [RFC PATCH] pipe: make pipe_release() deferrable Tetsuo Handa
@ 2020-08-22 16:30                 ` Linus Torvalds
  2020-08-23  0:21                   ` Tetsuo Handa
  0 siblings, 1 reply; 11+ messages in thread
From: Linus Torvalds @ 2020-08-22 16:30 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Al Viro, Andrea Arcangeli, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Dmitry Vyukov, Andrew Morton

On Fri, Aug 21, 2020 at 9:35 PM Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
>
> Therefore, this patch tries to convert __pipe_lock() in pipe_release()
> to killable, by deferring to a workqueue context when
> __pipe_lock_killable() fails.

I don't think this is an improvement.

If somebody can delay the pipe unlock arbitrarily, you've now turned a
user-visible blocking operation into blocking a workqueue instead. So
it's still there, and now it possibly is much worse and blocks
system_wq instead.

                Linus

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [RFC PATCH] pipe: make pipe_release() deferrable.
  2020-08-22 16:30                 ` Linus Torvalds
@ 2020-08-23  0:21                   ` Tetsuo Handa
  0 siblings, 0 replies; 11+ messages in thread
From: Tetsuo Handa @ 2020-08-23  0:21 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Al Viro, Andrea Arcangeli, syzbot, syzkaller-bugs, linux-fsdevel,
	Linux Kernel Mailing List, Dmitry Vyukov, Andrew Morton

On 2020/08/23 1:30, Linus Torvalds wrote:
> On Fri, Aug 21, 2020 at 9:35 PM Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
>>
>> Therefore, this patch tries to convert __pipe_lock() in pipe_release()
>> to killable, by deferring to a workqueue context when
>> __pipe_lock_killable() fails.
> 
> I don't think this is an improvement.
> 
> If somebody can delay the pipe unlock arbitrarily, you've now turned a
> user-visible blocking operation into blocking a workqueue instead. So
> it's still there, and now it possibly is much worse and blocks
> system_wq instead.

We can use a dedicated WQ_MEM_RECLAIM workqueue (like mm_percpu_wq)
if blocking system_wq is a concern.
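
For example, a dedicated queue could be created once at init time (the
name and variable are illustrative only):

	pipe_release_wq = alloc_workqueue("pipe_release", WQ_MEM_RECLAIM, 0);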

Moving a user-visible blocking operation into a workqueue helps avoid
AB-BA deadlocks in some situations. This hung task report says that a
task can't close the file descriptor of userfaultfd unless
pipe_release() completes, while pipe_write() (which is blocking
pipe_release()) can abort once the file descriptor of userfaultfd is
closed. A different report [1] says that a task can't close the file
descriptor of /dev/raw-gadget unless wdm_flush() completes, while
wdm_flush() can abort once the file descriptor of /dev/raw-gadget is
closed.

handle_userfault() is a method for delaying for an arbitrary period
(and this report says the period is forever, due to the AB-BA deadlock
condition). But even if each copy_page_to_iter()/copy_page_from_iter()
merely takes 10 seconds for whatever reason, that is sufficient to
trigger the khungtaskd warning (demonstrated by the patch below).

We can't use a too-short pagefault timeout ("do not break e.g. live
migration"), yet we need a short pagefault timeout ("do not trigger
the khungtaskd warning"), and we can't use a long khungtaskd timeout
("detect really hanging tasks"). I think these are incompatible as
long as an uninterruptible wait is used.

[1] https://lkml.kernel.org/r/1597188375-4787-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp

----------------------------------------
diff --git a/fs/pipe.c b/fs/pipe.c
index 60dbee457143..2510c6a330b8 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -82,9 +82,13 @@ void pipe_unlock(struct pipe_inode_info *pipe)
 }
 EXPORT_SYMBOL(pipe_unlock);
 
-static inline void __pipe_lock(struct pipe_inode_info *pipe)
+static inline void __pipe_lock(struct pipe_inode_info *pipe, const char *func)
 {
+	if (!strcmp(current->comm, "a.out"))
+		printk("%s started __pipe_lock()\n", func);
 	mutex_lock_nested(&pipe->mutex, I_MUTEX_PARENT);
+	if (!strcmp(current->comm, "a.out"))
+		printk("%s completed __pipe_lock()\n", func);
 }
 
 static inline void __pipe_unlock(struct pipe_inode_info *pipe)
@@ -244,7 +248,7 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
 		return 0;
 
 	ret = 0;
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 
 	/*
 	 * We only wake up writers if the pipe was full when we started
@@ -306,6 +310,12 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
 				break;
 			}
 
+			if (!strcmp(current->comm, "a.out")) {
+				int i;
+
+				for (i = 0; i < 10; i++)
+					schedule_timeout_uninterruptible(HZ);
+			}
 			written = copy_page_to_iter(buf->page, buf->offset, chars, to);
 			if (unlikely(written < chars)) {
 				if (!ret)
@@ -381,7 +391,7 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
 		if (wait_event_interruptible_exclusive(pipe->rd_wait, pipe_readable(pipe)) < 0)
 			return -ERESTARTSYS;
 
-		__pipe_lock(pipe);
+		__pipe_lock(pipe, __func__);
 		was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
 		wake_next_reader = true;
 	}
@@ -432,7 +442,7 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 	if (unlikely(total_len == 0))
 		return 0;
 
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 
 	if (!pipe->readers) {
 		send_sig(SIGPIPE, current, 0);
@@ -472,6 +482,12 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 			if (ret)
 				goto out;
 
+			if (!strcmp(current->comm, "a.out")) {
+				int i;
+
+				for (i = 0; i < 10; i++)
+					schedule_timeout_uninterruptible(HZ);
+			}
 			ret = copy_page_from_iter(buf->page, offset, chars, from);
 			if (unlikely(ret < chars)) {
 				ret = -EFAULT;
@@ -536,6 +552,12 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 				buf->flags = PIPE_BUF_FLAG_CAN_MERGE;
 			pipe->tmp_page = NULL;
 
+			if (!strcmp(current->comm, "a.out")) {
+				int i;
+
+				for (i = 0; i < 10; i++)
+					schedule_timeout_uninterruptible(HZ);
+			}
 			copied = copy_page_from_iter(page, 0, PAGE_SIZE, from);
 			if (unlikely(copied < PAGE_SIZE && iov_iter_count(from))) {
 				if (!ret)
@@ -577,7 +599,7 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
 			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
 		}
 		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
-		__pipe_lock(pipe);
+		__pipe_lock(pipe, __func__);
 		was_empty = pipe_empty(pipe->head, pipe->tail);
 		wake_next_writer = true;
 	}
@@ -617,7 +639,7 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 
 	switch (cmd) {
 	case FIONREAD:
-		__pipe_lock(pipe);
+		__pipe_lock(pipe, __func__);
 		count = 0;
 		head = pipe->head;
 		tail = pipe->tail;
@@ -634,7 +656,7 @@ static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 #ifdef CONFIG_WATCH_QUEUE
 	case IOC_WATCH_QUEUE_SET_SIZE: {
 		int ret;
-		__pipe_lock(pipe);
+		__pipe_lock(pipe, __func__);
 		ret = watch_queue_set_size(pipe, arg);
 		__pipe_unlock(pipe);
 		return ret;
@@ -719,7 +741,7 @@ pipe_release(struct inode *inode, struct file *file)
 {
 	struct pipe_inode_info *pipe = file->private_data;
 
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 	if (file->f_mode & FMODE_READ)
 		pipe->readers--;
 	if (file->f_mode & FMODE_WRITE)
@@ -744,7 +766,7 @@ pipe_fasync(int fd, struct file *filp, int on)
 	struct pipe_inode_info *pipe = filp->private_data;
 	int retval = 0;
 
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 	if (filp->f_mode & FMODE_READ)
 		retval = fasync_helper(fd, filp, on, &pipe->fasync_readers);
 	if ((filp->f_mode & FMODE_WRITE) && retval >= 0) {
@@ -1086,7 +1108,7 @@ static int fifo_open(struct inode *inode, struct file *filp)
 	filp->private_data = pipe;
 	/* OK, we have a pipe and it's pinned down */
 
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 
 	/* We can only do regular read/write on fifos */
 	stream_open(inode, filp);
@@ -1349,7 +1371,7 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!pipe)
 		return -EBADF;
 
-	__pipe_lock(pipe);
+	__pipe_lock(pipe, __func__);
 
 	switch (cmd) {
 	case F_SETPIPE_SZ:
----------------------------------------

----------------------------------------
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        /* Larger than the default 64KiB pipe capacity, so write() stays busy. */
        static char buffer[4096 * 32];
        int pipe_fd[2] = { -1, -1 };

        pipe(pipe_fd);
        if (fork() == 0) {
                close(pipe_fd[1]);
                sleep(1);
                close(pipe_fd[0]); // blocked until write() releases pipe mutex.
                _exit(0);
        }
        close(pipe_fd[0]);
        /* With the patch above, each page copy sleeps with the pipe mutex held. */
        write(pipe_fd[1], buffer, sizeof(buffer));
        close(pipe_fd[1]);
        return 0;
}
----------------------------------------
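
(Note: the test patch above keys on current->comm being "a.out", so the
reproducer must run under that name; compiling with a plain "gcc"
invocation keeps the default output name a.out.)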

----------------------------------------
[   67.674493][ T2800] pipe_write started __pipe_lock()
[   67.676820][ T2800] pipe_write completed __pipe_lock()
[   68.675176][ T2801] pipe_release started __pipe_lock()
[  217.648842][   T33] INFO: task a.out:2801 blocked for more than 144 seconds.
[  217.656134][   T33]       Not tainted 5.9.0-rc1+ #808
[  217.660467][   T33] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  217.667303][   T33] task:a.out           state:D stack:    0 pid: 2801 ppid:  2800 flags:0x00000080
[  217.674941][   T33] Call Trace:
[  217.677903][   T33]  __schedule+0x1f0/0x5b0
[  217.681688][   T33]  ? irq_work_queue+0x21/0x30
[  217.685524][   T33]  schedule+0x3f/0xb0
[  217.689741][   T33]  schedule_preempt_disabled+0x9/0x10
[  217.694009][   T33]  __mutex_lock.isra.12+0x2b2/0x4a0
[  217.698635][   T33]  ? vprintk_default+0x1a/0x20
[  217.702470][   T33]  __mutex_lock_slowpath+0xe/0x10
[  217.706434][   T33]  mutex_lock+0x27/0x30
[  217.710167][   T33]  pipe_release+0x4e/0x120
[  217.713657][   T33]  __fput+0x92/0x240
[  217.715745][   T33]  ____fput+0x9/0x10
[  217.717756][   T33]  task_work_run+0x69/0xa0
[  217.719992][   T33]  exit_to_user_mode_prepare+0x125/0x130
[  217.722709][   T33]  syscall_exit_to_user_mode+0x27/0xf0
[  217.725345][   T33]  do_syscall_64+0x3d/0x40
[  217.727820][   T33]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  217.731030][   T33] RIP: 0033:0x7fb09f7ba050
[  217.733359][   T33] Code: Bad RIP value.
[  217.735458][   T33] RSP: 002b:00007ffc41c02828 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
[  217.739376][   T33] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fb09f7ba050
[  217.743027][   T33] RDX: 0000000000000000 RSI: 00007ffc41c02650 RDI: 0000000000000003
[  217.746360][   T33] RBP: 0000000000000000 R08: 00007ffc41c02760 R09: 00007ffc41c025a0
[  217.749835][   T33] R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000400621
[  217.752847][   T33] R13: 00007ffc41c02920 R14: 0000000000000000 R15: 0000000000000000
[  222.768491][   T33] INFO: task a.out:2801 blocked for more than 149 seconds.
[  222.771191][   T33]       Not tainted 5.9.0-rc1+ #808
[  222.773277][   T33] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  222.776857][   T33] task:a.out           state:D stack:    0 pid: 2801 ppid:  2800 flags:0x00000080
[  222.780868][   T33] Call Trace:
[  222.782290][   T33]  __schedule+0x1f0/0x5b0
[  222.784255][   T33]  ? irq_work_queue+0x21/0x30
[  222.786433][   T33]  schedule+0x3f/0xb0
[  222.788449][   T33]  schedule_preempt_disabled+0x9/0x10
[  222.790697][   T33]  __mutex_lock.isra.12+0x2b2/0x4a0
[  222.792746][   T33]  ? vprintk_default+0x1a/0x20
[  222.794633][   T33]  __mutex_lock_slowpath+0xe/0x10
[  222.796631][   T33]  mutex_lock+0x27/0x30
[  222.798316][   T33]  pipe_release+0x4e/0x120
[  222.800085][   T33]  __fput+0x92/0x240
[  222.802046][   T33]  ____fput+0x9/0x10
[  222.803663][   T33]  task_work_run+0x69/0xa0
[  222.805386][   T33]  exit_to_user_mode_prepare+0x125/0x130
[  222.807582][   T33]  syscall_exit_to_user_mode+0x27/0xf0
[  222.809615][   T33]  do_syscall_64+0x3d/0x40
[  222.811299][   T33]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  222.813661][   T33] RIP: 0033:0x7fb09f7ba050
[  222.815357][   T33] Code: Bad RIP value.
[  222.817354][   T33] RSP: 002b:00007ffc41c02828 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
[  222.821300][   T33] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fb09f7ba050
[  222.824229][   T33] RDX: 0000000000000000 RSI: 00007ffc41c02650 RDI: 0000000000000003
[  222.827439][   T33] RBP: 0000000000000000 R08: 00007ffc41c02760 R09: 00007ffc41c025a0
[  222.830442][   T33] R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000400621
[  222.833388][   T33] R13: 00007ffc41c02920 R14: 0000000000000000 R15: 0000000000000000
[  227.888517][   T33] INFO: task a.out:2801 blocked for more than 154 seconds.
[  227.893406][   T33]       Not tainted 5.9.0-rc1+ #808
[  227.896959][   T33] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  227.902882][   T33] task:a.out           state:D stack:    0 pid: 2801 ppid:  2800 flags:0x00000080
[  227.910517][   T33] Call Trace:
[  227.913261][   T33]  __schedule+0x1f0/0x5b0
[  227.916444][   T33]  ? irq_work_queue+0x21/0x30
[  227.919706][   T33]  schedule+0x3f/0xb0
[  227.922740][   T33]  schedule_preempt_disabled+0x9/0x10
[  227.926799][   T33]  __mutex_lock.isra.12+0x2b2/0x4a0
[  227.930554][   T33]  ? vprintk_default+0x1a/0x20
[  227.933855][   T33]  __mutex_lock_slowpath+0xe/0x10
[  227.937291][   T33]  mutex_lock+0x27/0x30
[  227.940365][   T33]  pipe_release+0x4e/0x120
[  227.943504][   T33]  __fput+0x92/0x240
[  227.945133][   T33]  ____fput+0x9/0x10
[  227.947085][   T33]  task_work_run+0x69/0xa0
[  227.949361][   T33]  exit_to_user_mode_prepare+0x125/0x130
[  227.951688][   T33]  syscall_exit_to_user_mode+0x27/0xf0
[  227.953809][   T33]  do_syscall_64+0x3d/0x40
[  227.955615][   T33]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  227.958389][   T33] RIP: 0033:0x7fb09f7ba050
[  227.960177][   T33] Code: Bad RIP value.
[  227.961964][   T33] RSP: 002b:00007ffc41c02828 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
[  227.965075][   T33] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fb09f7ba050
[  227.968559][   T33] RDX: 0000000000000000 RSI: 00007ffc41c02650 RDI: 0000000000000003
[  227.971582][   T33] RBP: 0000000000000000 R08: 00007ffc41c02760 R09: 00007ffc41c025a0
[  227.974721][   T33] R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000400621
[  227.977725][   T33] R13: 00007ffc41c02920 R14: 0000000000000000 R15: 0000000000000000
[  231.537242][ T2801] pipe_release completed __pipe_lock()
[  231.551865][ T2800] pipe_write started __pipe_lock()
[  231.556124][ T2800] pipe_write completed __pipe_lock()
[  231.560827][ T2800] pipe_release started __pipe_lock()
[  231.565195][ T2800] pipe_release completed __pipe_lock()
----------------------------------------


^ permalink raw reply related	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2020-08-23  0:22 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-01  8:45 INFO: task hung in pipe_read (2) syzbot
2020-08-01 15:29 ` Tetsuo Handa
2020-08-01 17:39   ` Linus Torvalds
2020-08-07  5:31     ` Andrea Arcangeli
2020-08-08  1:01       ` Tetsuo Handa
2020-08-10 19:29         ` Andrea Arcangeli
2020-08-13  7:00           ` Tetsuo Handa
2020-08-13 11:20             ` Tetsuo Handa
2020-08-22  4:34               ` [RFC PATCH] pipe: make pipe_release() deferrable Tetsuo Handa
2020-08-22 16:30                 ` Linus Torvalds
2020-08-23  0:21                   ` Tetsuo Handa
