* INFO: task hung in __sb_start_write
@ 2018-05-03 10:17 syzbot
  2018-06-10 14:47 ` Tetsuo Handa
  0 siblings, 1 reply; 13+ messages in thread
From: syzbot @ 2018-05-03 10:17 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel, syzkaller-bugs, viro

Hello,

syzbot found the following crash on:

HEAD commit:    06dd3dfeea60 Merge tag 'char-misc-4.17-rc1' of git://git.k...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12911087800000
kernel config:  https://syzkaller.appspot.com/x/.config?x=301513bac2f9fd9
dashboard link: https://syzkaller.appspot.com/bug?extid=7b2866454055e43c21e5
compiler:       gcc (GCC) 8.0.1 20180301 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+7b2866454055e43c21e5@syzkaller.appspotmail.com

kworker/dying (22) used greatest stack depth: 8440 bytes left
INFO: task syz-executor1:4457 blocked for more than 120 seconds.
       Not tainted 4.16.0+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor1   D21448  4457   4449 0x00000000
Call Trace:
  context_switch kernel/sched/core.c:2848 [inline]
  __schedule+0x807/0x1e40 kernel/sched/core.c:3490
  schedule+0xef/0x430 kernel/sched/core.c:3549
  __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
  rwsem_down_read_failed+0x400/0x6e0 kernel/locking/rwsem-xadd.c:286
  call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
  __down_read arch/x86/include/asm/rwsem.h:83 [inline]
  __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
  percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
  percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
  __sb_start_write+0x2d7/0x300 fs/super.c:1340
  sb_start_write include/linux/fs.h:1548 [inline]
  mnt_want_write+0x3f/0xc0 fs/namespace.c:386
  do_unlinkat+0x2a3/0xa10 fs/namei.c:4027
  SYSC_unlink fs/namei.c:4092 [inline]
  SyS_unlink+0x26/0x30 fs/namei.c:4090
  do_syscall_64+0x29e/0x9d0 arch/x86/entry/common.c:287
  entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x455027
RSP: 002b:00007ffe9645abc8 EFLAGS: 00000202 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000455027
RDX: 0000000001272983 RSI: 00007ffe9645ac60 RDI: 00007ffe9645ac60
RBP: 00007ffe9645c970 R08: 0000000000000000 R09: 0000000000000010
R10: 000000000000000a R11: 0000000000000202 R12: 0000000001272940
R13: 0000000000000000 R14: 000000000000041b R15: 000000000003297e

Showing all locks held in the system:
2 locks held by khungtaskd/878:
 #0: 00000000e417309b (rcu_read_lock){....}, at: check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0: 00000000e417309b (rcu_read_lock){....}, at: watchdog+0x1ff/0xf60 kernel/hung_task.c:249
 #1: 00000000a42a84b8 (tasklist_lock){.+.+}, at: debug_show_all_locks+0xde/0x34a kernel/locking/lockdep.c:4470
2 locks held by rs:main Q:Reg/4288:
 #0: 00000000658bb541 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0x1a9/0x1e0 fs/file.c:766
 #1: 0000000052aff78e (sb_writers#4){++++}, at: file_start_write include/linux/fs.h:2709 [inline]
 #1: 0000000052aff78e (sb_writers#4){++++}, at: vfs_write+0x452/0x560 fs/read_write.c:548
2 locks held by getty/4381:
 #0: 00000000990269ba (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 000000009f1d2051 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4382:
 #0: 00000000ecd434e6 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 0000000034d736c1 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4383:
 #0: 0000000013764982 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 00000000202a6e1f (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4384:
 #0: 00000000ba3d2e6d (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 00000000e3323037 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4385:
 #0: 00000000a3add0e7 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 0000000081f7b524 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4386:
 #0: 00000000a73f06ea (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 000000002de50dc3 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4387:
 #0: 0000000087cccc64 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1: 000000000a3822a6 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
1 lock held by syz-executor1/4457:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
1 lock held by syz-executor7/4459:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
1 lock held by syz-executor5/4460:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
1 lock held by syz-executor3/4462:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
2 locks held by syz-executor6/14836:
 #0: 00000000f86d1ee3 (&mm->mmap_sem){++++}, at: __do_page_fault+0x381/0xe40 arch/x86/mm/fault.c:1328
 #1: 0000000002ee4fbd (sb_pagefaults){++++}, at: sb_start_pagefault include/linux/fs.h:1577 [inline]
 #1: 0000000002ee4fbd (sb_pagefaults){++++}, at: ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6074
1 lock held by syz-executor0/14842:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
1 lock held by syz-executor4/14843:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386
1 lock held by syz-executor2/14849:
 #0: 0000000052aff78e (sb_writers#4){++++}, at: sb_start_write include/linux/fs.h:1548 [inline]
 #0: 0000000052aff78e (sb_writers#4){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 878 Comm: khungtaskd Not tainted 4.16.0+ #1
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:17 [inline]
  dump_stack+0x1b9/0x29f lib/dump_stack.c:53
  nmi_cpu_backtrace.cold.4+0x19/0xce lib/nmi_backtrace.c:103
  nmi_trigger_cpumask_backtrace+0x151/0x192 lib/nmi_backtrace.c:62
  arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
  trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
  check_hung_task kernel/hung_task.c:132 [inline]
  check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
  watchdog+0xc10/0xf60 kernel/hung_task.c:249
  kthread+0x345/0x410 kernel/kthread.c:238
  ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:411
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:54


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report.
If you forgot to add the Reported-by tag, once the fix for this bug is
merged into any tree, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug report.
Note: all commands must start from beginning of the line in the email body.


* Re: INFO: task hung in __sb_start_write
  2018-05-03 10:17 INFO: task hung in __sb_start_write syzbot
@ 2018-06-10 14:47 ` Tetsuo Handa
  2018-06-11  7:30   ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-10 14:47 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: syzbot, linux-fsdevel, linux-kernel, syzkaller-bugs, viro

Hello.

Commits

  401c636a0eeb0d51 "kernel/hung_task.c: show all hung tasks before panic"
  8cc05c71ba5f7936 "locking/lockdep: Move sanity check to inside lockdep_print_held_locks()"

arrived at linux.git and syzbot started giving us more hints.

Quoting from https://syzkaller.appspot.com/text?tag=CrashReport&x=1477e81f800000 :
----------------------------------------
2 locks held by rs:main Q:Reg/4416:
 #0: 00000000dff3f899 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0x1a9/0x1e0 fs/file.c:766
 #1: 00000000c4a96cb8 (sb_writers#6){++++}, at: file_start_write include/linux/fs.h:2737 [inline]
 #1: 00000000c4a96cb8 (sb_writers#6){++++}, at: vfs_write+0x452/0x560 fs/read_write.c:548

1 lock held by rsyslogd/4418:
 #0: 000000007f0c215c (&f->f_pos_lock){+.+.}, at: __fdget_pos+0x1a9/0x1e0 fs/file.c:766

syz-executor4   D22224  4597   4588 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_write include/linux/fs.h:1552 [inline]
 mnt_want_write+0x3f/0xc0 fs/namespace.c:386
 do_unlinkat+0x2a3/0xa10 fs/namei.c:4026
 __do_sys_unlink fs/namei.c:4091 [inline]
 __se_sys_unlink fs/namei.c:4089 [inline]
 __x64_sys_unlink+0x42/0x50 fs/namei.c:4089
 do_syscall_64+0x1b1/0x800 arch/x86/entry/common.c:287
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
1 lock held by syz-executor4/4597:
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: sb_start_write include/linux/fs.h:1552 [inline]
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386

syz-executor6   D21536  4600   4591 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_write include/linux/fs.h:1552 [inline]
 mnt_want_write+0x3f/0xc0 fs/namespace.c:386
 do_unlinkat+0x2a3/0xa10 fs/namei.c:4026
 __do_sys_unlink fs/namei.c:4091 [inline]
 __se_sys_unlink fs/namei.c:4089 [inline]
 __x64_sys_unlink+0x42/0x50 fs/namei.c:4089
 do_syscall_64+0x1b1/0x800 arch/x86/entry/common.c:287
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
1 lock held by syz-executor6/4600:
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: sb_start_write include/linux/fs.h:1552 [inline]
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386

syz-executor3   D21536  7320   7261 0x00000000
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_write include/linux/fs.h:1552 [inline]
 mnt_want_write+0x3f/0xc0 fs/namespace.c:386
 do_unlinkat+0x2a3/0xa10 fs/namei.c:4026
 __do_sys_unlink fs/namei.c:4091 [inline]
 __se_sys_unlink fs/namei.c:4089 [inline]
 __x64_sys_unlink+0x42/0x50 fs/namei.c:4089
 do_syscall_64+0x1b1/0x800 arch/x86/entry/common.c:287
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
1 lock held by syz-executor3/7320:
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: sb_start_write include/linux/fs.h:1552 [inline]
 #0: 00000000c4a96cb8 (sb_writers#6){++++}, at: mnt_want_write+0x3f/0xc0 fs/namespace.c:386

syz-executor2   D24032 13221   4593 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_pagefault include/linux/fs.h:1581 [inline]
 ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125
 do_page_mkwrite+0x146/0x500 mm/memory.c:2380
 do_shared_fault mm/memory.c:3706 [inline]
 do_fault mm/memory.c:3745 [inline]
 handle_pte_fault mm/memory.c:3972 [inline]
 __handle_mm_fault+0x2acb/0x4390 mm/memory.c:4096
 handle_mm_fault+0x53a/0xc70 mm/memory.c:4133
 __do_page_fault+0x60b/0xe40 arch/x86/mm/fault.c:1403
 do_page_fault+0xee/0x8a7 arch/x86/mm/fault.c:1478
 page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1160
2 locks held by syz-executor2/13221:
 #0: 000000002618cb0b (&mm->mmap_sem){++++}, at: __do_page_fault+0x381/0xe40 arch/x86/mm/fault.c:1332
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: sb_start_pagefault include/linux/fs.h:1581 [inline]
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125

syz-executor5   D24696 13225   4598 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_pagefault include/linux/fs.h:1581 [inline]
 ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125
 do_page_mkwrite+0x146/0x500 mm/memory.c:2380
 wp_page_shared mm/memory.c:2676 [inline]
 do_wp_page+0xf5d/0x1990 mm/memory.c:2776
 handle_pte_fault mm/memory.c:3988 [inline]
 __handle_mm_fault+0x29f5/0x4390 mm/memory.c:4096
 handle_mm_fault+0x53a/0xc70 mm/memory.c:4133
 __do_page_fault+0x60b/0xe40 arch/x86/mm/fault.c:1403
 do_page_fault+0xee/0x8a7 arch/x86/mm/fault.c:1478
 page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1160
2 locks held by syz-executor5/13225:
 #0: 000000000aa505ed (&mm->mmap_sem){++++}, at: __do_page_fault+0x381/0xe40 arch/x86/mm/fault.c:1332
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: sb_start_pagefault include/linux/fs.h:1581 [inline]
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125

syz-executor1   D23432 13236   4594 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2856 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3498
 schedule+0xef/0x430 kernel/sched/core.c:3542
 __rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
 rwsem_down_read_failed+0x350/0x5e0 kernel/locking/rwsem-xadd.c:286
 call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
 __down_read arch/x86/include/asm/rwsem.h:83 [inline]
 __percpu_down_read+0x15d/0x200 kernel/locking/percpu-rwsem.c:85
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:49 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x2d7/0x300 fs/super.c:1403
 sb_start_pagefault include/linux/fs.h:1581 [inline]
 ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125
 do_page_mkwrite+0x146/0x500 mm/memory.c:2380
 wp_page_shared mm/memory.c:2676 [inline]
 do_wp_page+0xf5d/0x1990 mm/memory.c:2776
 handle_pte_fault mm/memory.c:3988 [inline]
 __handle_mm_fault+0x29f5/0x4390 mm/memory.c:4096
 handle_mm_fault+0x53a/0xc70 mm/memory.c:4133
 __do_page_fault+0x60b/0xe40 arch/x86/mm/fault.c:1403
 do_page_fault+0xee/0x8a7 arch/x86/mm/fault.c:1478
 page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1160
2 locks held by syz-executor1/13236:
 #0: 0000000079279c9c (&mm->mmap_sem){++++}, at: __do_page_fault+0x381/0xe40 arch/x86/mm/fault.c:1332
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: sb_start_pagefault include/linux/fs.h:1581 [inline]
 #1: 00000000a50d0f8c (sb_pagefaults){++++}, at: ext4_page_mkwrite+0x1c8/0x1420 fs/ext4/inode.c:6125
----------------------------------------

It looks quite strange that nobody is holding the percpu_rw_semaphore
for write, yet everybody is stuck trying to acquire it for read. (Since
there is no "X locks held by ..." line without a follow-up "#0:" line,
there is no possibility that somebody is in TASK_RUNNING state while
holding the percpu_rw_semaphore for write.)

I suspect that either the API has a bug or the API is being used incorrectly.
Any ideas for debugging this?


* Re: INFO: task hung in __sb_start_write
  2018-06-10 14:47 ` Tetsuo Handa
@ 2018-06-11  7:30   ` Peter Zijlstra
  2018-06-11  7:39     ` Dmitry Vyukov
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-11  7:30 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Ingo Molnar, Will Deacon, syzbot, linux-fsdevel, linux-kernel,
	syzkaller-bugs, viro

On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:

> It looks quite strange that nobody is holding the percpu_rw_semaphore
> for write, yet everybody is stuck trying to acquire it for read. (Since
> there is no "X locks held by ..." line without a follow-up "#0:" line,
> there is no possibility that somebody is in TASK_RUNNING state while
> holding the percpu_rw_semaphore for write.)
> 
> I suspect that either the API has a bug or the API is being used incorrectly.
> Any ideas for debugging this?

Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
magic.


* Re: INFO: task hung in __sb_start_write
  2018-06-11  7:30   ` Peter Zijlstra
@ 2018-06-11  7:39     ` Dmitry Vyukov
  2018-06-14 10:33         ` Tetsuo Handa
  0 siblings, 1 reply; 13+ messages in thread
From: Dmitry Vyukov @ 2018-06-11  7:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Tetsuo Handa, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro

On Mon, Jun 11, 2018 at 9:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:
>
>> It looks quite strange that nobody is holding the percpu_rw_semaphore
>> for write, yet everybody is stuck trying to acquire it for read. (Since
>> there is no "X locks held by ..." line without a follow-up "#0:" line,
>> there is no possibility that somebody is in TASK_RUNNING state while
>> holding the percpu_rw_semaphore for write.)
>>
>> I suspect that either the API has a bug or the API is being used incorrectly.
>> Any ideas for debugging this?
>
> Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
> magic.

Do you mean that we froze a filesystem? We have tried never to issue
ioctl(FIFREEZE) during fuzzing. Are there other ways a filesystem can end up frozen?


* Re: INFO: task hung in __sb_start_write
  2018-06-11  7:39     ` Dmitry Vyukov
@ 2018-06-14 10:33         ` Tetsuo Handa
  0 siblings, 0 replies; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-14 10:33 UTC (permalink / raw)
  To: Dmitry Vyukov, Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, syzbot, linux-fsdevel, LKML,
	syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/11 16:39, Dmitry Vyukov wrote:
> On Mon, Jun 11, 2018 at 9:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:
>>
>>> It looks quite strange that nobody is holding the percpu_rw_semaphore
>>> for write, yet everybody is stuck trying to acquire it for read. (Since
>>> there is no "X locks held by ..." line without a follow-up "#0:" line,
>>> there is no possibility that somebody is in TASK_RUNNING state while
>>> holding the percpu_rw_semaphore for write.)
>>>
>>> I suspect that either the API has a bug or the API is being used incorrectly.
>>> Any ideas for debugging this?
>>
>> Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
>> magic.
> 
> Do you mean that we froze a filesystem? We have tried never to issue
> ioctl(FIFREEZE) during fuzzing. Are there other ways a filesystem can end up frozen?
> 

Dmitry, can you try this patch? If you can get

[   48.080875] ================================================
[   48.083648] WARNING: lock held when returning to user space!
[   48.086384] 4.17.0+ #588 Tainted: G                T
[   48.088890] ------------------------------------------------
[   48.091447] a.out/1243 is leaving the kernel with locks still held!
[   48.093487] 3 locks held by a.out/1243:
[   48.094964]  #0: 00000000148ae74c (sb_writers#8){++++}, at: percpu_down_write+0x1d/0x110
[   48.097622]  #1: 000000001c9e7d4d (sb_pagefaults){++++}, at: percpu_down_write+0x1d/0x110
[   48.100432]  #2: 000000003c3d2e71 (sb_internal){++++}, at: percpu_down_write+0x1d/0x110

with this patch applied, then there is a way to return to userspace with
locks held. If it also produces possible-deadlock warning messages, so
much the better.

If you cannot reproduce the problem with this patch, I think we need a
git tree in which to carry it. But linux-next.git has not yet been
re-added to the list of trees to test, and linux.git is not suitable for
a temporary debug patch...



From 5ba3a5bc6401a937d30a9e3421ac2a29ee99588f Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Thu, 14 Jun 2018 16:14:55 +0900
Subject: [PATCH] locking/lockdep: Add config option to allow lockdep splat
 upon ioctl(FIFREEZE) request

syzbot is hitting hung-task problems at __sb_start_write() [1]. Hung
tasks at __sb_start_write() suggest that a filesystem was frozen, but
syzbot does not issue ioctl(FIFREEZE) requests, so the root cause of the
hung tasks is currently unknown. If there are other paths which freeze a
filesystem and return to userspace without thawing it, syzbot needs to
avoid testing such paths. Therefore, this patch adds a kernel config
option which lets lockdep splat when such a path is taken.

[1] https://syzkaller.appspot.com/bug?id=287aa8708bc940d0ca1645223c53dd4c2d203be6

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
---
 fs/super.c        |  4 ++++
 lib/Kconfig.debug | 13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/fs/super.c b/fs/super.c
index 50728d9..eb51268 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1428,10 +1428,12 @@ static void sb_wait_write(struct super_block *sb, int level)
  */
 static void lockdep_sb_freeze_release(struct super_block *sb)
 {
+#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
 	int level;
 
 	for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
 		percpu_rwsem_release(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+#endif
 }
 
 /*
@@ -1439,10 +1441,12 @@ static void lockdep_sb_freeze_release(struct super_block *sb)
  */
 static void lockdep_sb_freeze_acquire(struct super_block *sb)
 {
+#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
 	int level;
 
 	for (level = 0; level < SB_FREEZE_LEVELS; ++level)
 		percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+#endif
 }
 
 static void sb_freeze_unlock(struct super_block *sb)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d543c65..e119d19 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1170,6 +1170,19 @@ config DEBUG_LOCK_ALLOC
 	 spin_lock_init()/mutex_init()/etc., or whether there is any lock
 	 held during task exit.
 
+config LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
+       bool "Allow lockdep splat upon ioctl(FIFREEZE) request"
+       depends on PROVE_LOCKING
+       default n
+       help
+	 Freezing filesystems and not thawing shortly will kill the system with
+	 hung tasks. Since ioctl(FIFREEZE) request returns to userspace with
+	 locks held, in order to suppress lockdep splat messages, information
+	 of held locks is cleared without actually releasing locks held. But
+	 doing so might hide possible deadlock bugs. Thus, this option allows
+	 lockdep splat if ioctl(FIFREEZE) is requested, in order to help
+	 finding possible deadlock bugs.
+
 config LOCKDEP
 	bool
 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: INFO: task hung in __sb_start_write
@ 2018-06-14 10:33         ` Tetsuo Handa
  0 siblings, 0 replies; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-14 10:33 UTC (permalink / raw)
  To: Dmitry Vyukov, Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, syzbot, linux-fsdevel, LKML,
	syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/11 16:39, Dmitry Vyukov wrote:
> On Mon, Jun 11, 2018 at 9:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:
>>
>>> This looks quite strange that nobody is holding percpu_rw_semaphore for
>>> write but everybody is stuck trying to hold it for read. (Since there
>>> is no "X locks held by ..." line without followup "#0:" line, there is
>>> no possibility that somebody is in TASK_RUNNING state while holding
>>> percpu_rw_semaphore for write.)
>>>
>>> I feel that either API has a bug or API usage is wrong.
>>> Any idea for debugging this?
>>
>> Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
>> magic.
> 
> Do you mean that we froze fs? We tried to never-ever issue
> ioctl(FIFREEZE) during fuzzing. Are there other ways to do this?
> 

Dmitry, can you try this patch? If you can get

[   48.080875] ================================================
[   48.083648] WARNING: lock held when returning to user space!
[   48.086384] 4.17.0+ #588 Tainted: G                T
[   48.088890] ------------------------------------------------
[   48.091447] a.out/1243 is leaving the kernel with locks still held!
[   48.093487] 3 locks held by a.out/1243:
[   48.094964]  #0: 00000000148ae74c (sb_writers#8){++++}, at: percpu_down_write+0x1d/0x110
[   48.097622]  #1: 000000001c9e7d4d (sb_pagefaults){++++}, at: percpu_down_write+0x1d/0x110
[   48.100432]  #2: 000000003c3d2e71 (sb_internal){++++}, at: percpu_down_write+0x1d/0x110

with this patch, there is a way to return to userspace with locks held.
If you got possible deadlock warning messages, it will be great.

If you cannot reproduce with this patch, I think we need a git tree to try
this patch. But linux-next.git is not yet re-added to the list of trees to
test and linux.git is not suitable for temporary debug patch...



>From 5ba3a5bc6401a937d30a9e3421ac2a29ee99588f Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Thu, 14 Jun 2018 16:14:55 +0900
Subject: [PATCH] locking/lockdep: Add config option to allow lockdep splat
 upon ioctl(FIFREEZE) request

syzbot is hitting hung task problems at __sb_start_write() [1]. While hung
tasks at __sb_start_write() suggest that filesystem was frozen, syzbot is
not doing ioctl(FIFREEZE) requests. Therefore, the root cause of hung tasks
is currently unknown. If there are other paths to freeze filesystem and
return to userspace without thawing, syzbot needs to avoid testing such
paths. Therefore, this patch adds a kernel config option which allows
lockdep to splat if syzbot found such paths.

[1] https://syzkaller.appspot.com/bug?id=287aa8708bc940d0ca1645223c53dd4c2d203be6

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
---
 fs/super.c        |  4 ++++
 lib/Kconfig.debug | 13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/fs/super.c b/fs/super.c
index 50728d9..eb51268 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1428,10 +1428,12 @@ static void sb_wait_write(struct super_block *sb, int level)
  */
 static void lockdep_sb_freeze_release(struct super_block *sb)
 {
+#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
 	int level;
 
 	for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
 		percpu_rwsem_release(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+#endif
 }
 
 /*
@@ -1439,10 +1441,12 @@ static void lockdep_sb_freeze_release(struct super_block *sb)
  */
 static void lockdep_sb_freeze_acquire(struct super_block *sb)
 {
+#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
 	int level;
 
 	for (level = 0; level < SB_FREEZE_LEVELS; ++level)
 		percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+#endif
 }
 
 static void sb_freeze_unlock(struct super_block *sb)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d543c65..e119d19 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1170,6 +1170,19 @@ config DEBUG_LOCK_ALLOC
 	 spin_lock_init()/mutex_init()/etc., or whether there is any lock
 	 held during task exit.
 
+config LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
+       bool "Allow lockdep splat upon ioctl(FIFREEZE) request"
+       depends on PROVE_LOCKING
+       default n
+       help
+	 Freezing filesystems and not thawing shortly will kill the system with
+	 hung tasks. Since ioctl(FIFREEZE) request returns to userspace with
+	 locks held, in order to suppress lockdep splat messages, information
+	 of held locks is cleared without actually releasing locks held. But
+	 doing so might hide possible deadlock bugs. Thus, this option allows
+	 lockdep splat if ioctl(FIFREEZE) is requested, in order to help
+	 finding possible deadlock bugs.
+
 config LOCKDEP
 	bool
 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
-- 
1.8.3.1


* Re: INFO: task hung in __sb_start_write
  2018-06-14 10:33         ` Tetsuo Handa
@ 2018-06-15  9:19         ` Dmitry Vyukov
  2018-06-15 19:40           ` Tetsuo Handa
  -1 siblings, 1 reply; 13+ messages in thread
From: Dmitry Vyukov @ 2018-06-15  9:19 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On Thu, Jun 14, 2018 at 12:33 PM, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> On 2018/06/11 16:39, Dmitry Vyukov wrote:
>> On Mon, Jun 11, 2018 at 9:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>> On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:
>>>
>>>> This looks quite strange that nobody is holding percpu_rw_semaphore for
>>>> write but everybody is stuck trying to hold it for read. (Since there
>>>> is no "X locks held by ..." line without followup "#0:" line, there is
>>>> no possibility that somebody is in TASK_RUNNING state while holding
>>>> percpu_rw_semaphore for write.)
>>>>
>>>> I feel that either API has a bug or API usage is wrong.
>>>> Any idea for debugging this?
>>>
>>> Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
>>> magic.
>>
>> Do you mean that we froze fs? We tried to never-ever issue
>> ioctl(FIFREEZE) during fuzzing. Are there other ways to do this?
>>
>
> Dmitry, can you try this patch? If you can get
>
> [   48.080875] ================================================
> [   48.083648] WARNING: lock held when returning to user space!
> [   48.086384] 4.17.0+ #588 Tainted: G                T
> [   48.088890] ------------------------------------------------
> [   48.091447] a.out/1243 is leaving the kernel with locks still held!
> [   48.093487] 3 locks held by a.out/1243:
> [   48.094964]  #0: 00000000148ae74c (sb_writers#8){++++}, at: percpu_down_write+0x1d/0x110
> [   48.097622]  #1: 000000001c9e7d4d (sb_pagefaults){++++}, at: percpu_down_write+0x1d/0x110
> [   48.100432]  #2: 000000003c3d2e71 (sb_internal){++++}, at: percpu_down_write+0x1d/0x110
>
> with this patch, there is a way to return to userspace with locks held.
> If you also get possible deadlock warning messages, that would be great.
>
> If you cannot reproduce it with this patch, I think we need a git tree for
> trying this patch under syzbot. But linux-next.git has not yet been re-added
> to the list of trees to test, and linux.git is not suitable for a temporary
> debug patch...
>
>
>
> From 5ba3a5bc6401a937d30a9e3421ac2a29ee99588f Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Date: Thu, 14 Jun 2018 16:14:55 +0900
> Subject: [PATCH] locking/lockdep: Add config option to allow lockdep splat
>  upon ioctl(FIFREEZE) request
>
> syzbot is hitting hung task problems at __sb_start_write() [1]. While hung
> tasks at __sb_start_write() suggest that a filesystem was frozen, syzbot is
> not issuing ioctl(FIFREEZE) requests. Therefore, the root cause of the hung
> tasks is currently unknown. If there are other paths that freeze a filesystem
> and return to userspace without thawing it, syzbot needs to avoid testing
> such paths. Therefore, this patch adds a kernel config option which allows
> lockdep to splat if syzbot hits such paths.
>
> [1] https://syzkaller.appspot.com/bug?id=287aa8708bc940d0ca1645223c53dd4c2d203be6
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> ---
>  fs/super.c        |  4 ++++
>  lib/Kconfig.debug | 13 +++++++++++++
>  2 files changed, 17 insertions(+)
>
> diff --git a/fs/super.c b/fs/super.c
> index 50728d9..eb51268 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -1428,10 +1428,12 @@ static void sb_wait_write(struct super_block *sb, int level)
>   */
>  static void lockdep_sb_freeze_release(struct super_block *sb)
>  {
> +#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
>         int level;
>
>         for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
>                 percpu_rwsem_release(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
> +#endif
>  }
>
>  /*
> @@ -1439,10 +1441,12 @@ static void lockdep_sb_freeze_release(struct super_block *sb)
>   */
>  static void lockdep_sb_freeze_acquire(struct super_block *sb)
>  {
> +#ifndef CONFIG_LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
>         int level;
>
>         for (level = 0; level < SB_FREEZE_LEVELS; ++level)
>                 percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
> +#endif
>  }
>
>  static void sb_freeze_unlock(struct super_block *sb)
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index d543c65..e119d19 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1170,6 +1170,19 @@ config DEBUG_LOCK_ALLOC
>          spin_lock_init()/mutex_init()/etc., or whether there is any lock
>          held during task exit.
>
> +config LOCKDEP_REPORT_FSFREEZE_FROM_USERSPACE
> +       bool "Allow lockdep splat upon ioctl(FIFREEZE) request"
> +       depends on PROVE_LOCKING
> +       default n
> +       help
> +        Freezing filesystems without thawing them shortly afterwards will
> +        kill the system with hung tasks. Since an ioctl(FIFREEZE) request
> +        returns to userspace with locks held, the held-lock information is
> +        cleared without actually releasing the locks, in order to suppress
> +        lockdep splat messages. But doing so might hide possible deadlock
> +        bugs. Thus, this option allows a lockdep splat when ioctl(FIFREEZE)
> +        is requested, in order to help find possible deadlock bugs.
> +
>  config LOCKDEP
>         bool
>         depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
> --
> 1.8.3.1


I've tried replaying 5 logs with this patch, but I don't see us
returning to user-space with locks held, nor any deadlock reports.

What I've noticed is that all these logs contain lots of error
messages around the block subsystem. Perhaps if we can identify the common
denominator across the error messages in the different logs, we can find
the one responsible for the hangs.

[  443.933790] FAT-fs (loop3): Directory bread(block 137) failed
[  444.077580] attempt to access beyond end of device
[  444.082571] loop3: rw=2049, want=310, limit=128
[  444.087278] buffer_io_error: 238 callbacks suppressed
[  444.087291] Buffer I/O error on dev loop3, logical block 309, lost
async page write
[  444.100466] attempt to access beyond end of device
[  444.105465] loop3: rw=2049, want=311, limit=128
[  444.110160] Buffer I/O error on dev loop3, logical block 310, lost
async page write
[  444.118684] attempt to access beyond end of device
[  444.123660] loop3: rw=2049, want=312, limit=128
[  444.225240] attempt to access beyond end of device
[  444.230192] loop3: rw=2049, want=2153, limit=128
[  444.244874] attempt to access beyond end of device
[  444.250144] loop3: rw=0, want=310, limit=128

[  199.713659] print_req_error: I/O error, dev loop0, sector 0
[  199.714970] Buffer I/O error on dev loop0, logical block 0, lost
async page write


* Re: INFO: task hung in __sb_start_write
  2018-06-15  9:19         ` Dmitry Vyukov
@ 2018-06-15 19:40           ` Tetsuo Handa
  2018-06-19 11:10             ` Tetsuo Handa
  0 siblings, 1 reply; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-15 19:40 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/15 18:19, Dmitry Vyukov wrote:
> On Thu, Jun 14, 2018 at 12:33 PM, Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
>> On 2018/06/11 16:39, Dmitry Vyukov wrote:
>>> On Mon, Jun 11, 2018 at 9:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>>> On Sun, Jun 10, 2018 at 11:47:56PM +0900, Tetsuo Handa wrote:
>>>>
>>>>> This looks quite strange that nobody is holding percpu_rw_semaphore for
>>>>> write but everybody is stuck trying to hold it for read. (Since there
>>>>> is no "X locks held by ..." line without followup "#0:" line, there is
>>>>> no possibility that somebody is in TASK_RUNNING state while holding
>>>>> percpu_rw_semaphore for write.)
>>>>>
>>>>> I feel that either API has a bug or API usage is wrong.
>>>>> Any idea for debugging this?
>>>>
>>>> Look at percpu_rwsem_release() and usage. The whole fs freezer thing is
>>>> magic.
>>>
>>> Do you mean that we froze fs? We tried to never-ever issue
>>> ioctl(FIFREEZE) during fuzzing. Are there other ways to do this?
>>>
>>
>> Dmitry, can you try this patch? If you can get
> 
> I've tried replaying 5 logs with this patch, but I don't see us
> returning to user-space with locks held, nor any deadlock reports.

Did you succeed in reproducing the khungtaskd messages with this patch?
If yes, was one of sb_writers#X/sb_pagefaults/sb_internal printed there?
If not, we would want a git tree for testing under syzbot.

> 
> What I've noticed is that all these logs contain lots of error
> messages around block subsystem. Perhaps if we can identify the common
> denominator across error messages in different logs, we can find the
> one responsible for hangs.

While there are lots of error messages around the block subsystem, how can
down_read() fail to continue unless up_write() somehow failed to wake up
waiters sleeping in down_read(), given that khungtaskd says that none of
sb_writers#X/sb_pagefaults/sb_internal was held?

Hmm, might there be other locations calling percpu_rwsem_release()?


* Re: INFO: task hung in __sb_start_write
  2018-06-15 19:40           ` Tetsuo Handa
@ 2018-06-19 11:10             ` Tetsuo Handa
  2018-06-19 11:47               ` Dmitry Vyukov
  2018-07-11 11:13               ` Tetsuo Handa
  0 siblings, 2 replies; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-19 11:10 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/16 4:40, Tetsuo Handa wrote:
> Hmm, there might be other locations calling percpu_rwsem_release() ?

There are other locations calling percpu_rwsem_release(), but only a few.

include/linux/fs.h:1494:#define __sb_writers_release(sb, lev)   \
include/linux/fs.h-1495-        percpu_rwsem_release(&(sb)->s_writers.rw_sem[(lev)-1], 1, _THIS_IP_)

fs/btrfs/transaction.c:1821:            __sb_writers_release(fs_info->sb, SB_FREEZE_FS);
fs/aio.c:1566:                  __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
fs/xfs/xfs_aops.c:211:  __sb_writers_release(ioend->io_inode->i_sb, SB_FREEZE_FS);



I'd like to check what atomic_long_read(&sem->rw_sem.count) says
when a hung task is reported.

Dmitry, if you succeed in reproducing the khungtaskd messages, can you try this patch?

---
 include/linux/sched.h         |  1 +
 kernel/hung_task.c            | 25 +++++++++++++++++++++++++
 kernel/locking/percpu-rwsem.c |  2 ++
 3 files changed, 28 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 87bf02d..af95251 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1179,6 +1179,7 @@ struct task_struct {
 	/* Used by LSM modules for access restriction: */
 	void				*security;
 #endif
+	struct percpu_rw_semaphore	*pcpu_rwsem_read_waiting;
 
 	/*
 	 * New fields for task_struct should be added above here, so that
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 32b4794..409cc82 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -18,6 +18,7 @@
 #include <linux/utsname.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/debug.h>
+#include <linux/percpu-rwsem.h>
 
 #include <trace/events/sched.h>
 
@@ -77,6 +78,29 @@ static int __init hung_task_panic_setup(char *str)
 	.notifier_call = hung_task_panic,
 };
 
+static void dump_percpu_rwsem_state(struct percpu_rw_semaphore *sem)
+{
+	unsigned int sum = 0;
+	int cpu;
+
+	if (!sem)
+		return;
+	pr_info("percpu_rw_semaphore(%p)\n", sem);
+	pr_info("  ->rw_sem.count=0x%lx\n",
+		atomic_long_read(&sem->rw_sem.count));
+	pr_info("  ->rss.gp_state=%d\n", sem->rss.gp_state);
+	pr_info("  ->rss.gp_count=%d\n", sem->rss.gp_count);
+	pr_info("  ->rss.cb_state=%d\n", sem->rss.cb_state);
+	pr_info("  ->rss.gp_type=%d\n", sem->rss.gp_type);
+	pr_info("  ->readers_block=%d\n", sem->readers_block);
+	for_each_possible_cpu(cpu)
+		sum += per_cpu(*sem->read_count, cpu);
+	pr_info("  ->read_count=%d\n", sum);
+	pr_info("  ->list_empty(rw_sem.wait_list)=%d\n",
+	       list_empty(&sem->rw_sem.wait_list));
+	pr_info("  ->writer.task=%p\n", sem->writer.task);
+}
+
 static void check_hung_task(struct task_struct *t, unsigned long timeout)
 {
 	unsigned long switch_count = t->nvcsw + t->nivcsw;
@@ -122,6 +146,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 		pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
 			" disables this message.\n");
 		sched_show_task(t);
+		dump_percpu_rwsem_state(t->pcpu_rwsem_read_waiting);
 		hung_task_show_lock = true;
 	}
 
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 883cf1b..b3654ab 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -82,7 +82,9 @@ int __percpu_down_read(struct percpu_rw_semaphore *sem, int try)
 	/*
 	 * Avoid lockdep for the down/up_read() we already have them.
 	 */
+	current->pcpu_rwsem_read_waiting = sem;
 	__down_read(&sem->rw_sem);
+	current->pcpu_rwsem_read_waiting = NULL;
 	this_cpu_inc(*sem->read_count);
 	__up_read(&sem->rw_sem);
 
-- 
1.8.3.1



* Re: INFO: task hung in __sb_start_write
  2018-06-19 11:10             ` Tetsuo Handa
@ 2018-06-19 11:47               ` Dmitry Vyukov
  2018-06-19 13:00                 ` Tetsuo Handa
  2018-07-11 11:13               ` Tetsuo Handa
  1 sibling, 1 reply; 13+ messages in thread
From: Dmitry Vyukov @ 2018-06-19 11:47 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On Tue, Jun 19, 2018 at 1:10 PM, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> On 2018/06/16 4:40, Tetsuo Handa wrote:
>> Hmm, there might be other locations calling percpu_rwsem_release() ?
>
> There are other locations calling percpu_rwsem_release(), but quite few.
>
> include/linux/fs.h:1494:#define __sb_writers_release(sb, lev)   \
> include/linux/fs.h-1495-        percpu_rwsem_release(&(sb)->s_writers.rw_sem[(lev)-1], 1, _THIS_IP_)
>
> fs/btrfs/transaction.c:1821:            __sb_writers_release(fs_info->sb, SB_FREEZE_FS);
> fs/aio.c:1566:                  __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> fs/xfs/xfs_aops.c:211:  __sb_writers_release(ioend->io_inode->i_sb, SB_FREEZE_FS);
>
>
>
> I'd like to check what atomic_long_read(&sem->rw_sem.count) says
> when a hung task is reported.
>
> Dmitry, if you succeed in reproducing the khungtaskd messages, can you try this patch?
>
> ---
>  include/linux/sched.h         |  1 +
>  kernel/hung_task.c            | 25 +++++++++++++++++++++++++
>  kernel/locking/percpu-rwsem.c |  2 ++
>  3 files changed, 28 insertions(+)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 87bf02d..af95251 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1179,6 +1179,7 @@ struct task_struct {
>         /* Used by LSM modules for access restriction: */
>         void                            *security;
>  #endif
> +       struct percpu_rw_semaphore      *pcpu_rwsem_read_waiting;
>
>         /*
>          * New fields for task_struct should be added above here, so that
> diff --git a/kernel/hung_task.c b/kernel/hung_task.c
> index 32b4794..409cc82 100644
> --- a/kernel/hung_task.c
> +++ b/kernel/hung_task.c
> @@ -18,6 +18,7 @@
>  #include <linux/utsname.h>
>  #include <linux/sched/signal.h>
>  #include <linux/sched/debug.h>
> +#include <linux/percpu-rwsem.h>
>
>  #include <trace/events/sched.h>
>
> @@ -77,6 +78,29 @@ static int __init hung_task_panic_setup(char *str)
>         .notifier_call = hung_task_panic,
>  };
>
> +static void dump_percpu_rwsem_state(struct percpu_rw_semaphore *sem)
> +{
> +       unsigned int sum = 0;
> +       int cpu;
> +
> +       if (!sem)
> +               return;
> +       pr_info("percpu_rw_semaphore(%p)\n", sem);
> +       pr_info("  ->rw_sem.count=0x%lx\n",
> +               atomic_long_read(&sem->rw_sem.count));
> +       pr_info("  ->rss.gp_state=%d\n", sem->rss.gp_state);
> +       pr_info("  ->rss.gp_count=%d\n", sem->rss.gp_count);
> +       pr_info("  ->rss.cb_state=%d\n", sem->rss.cb_state);
> +       pr_info("  ->rss.gp_type=%d\n", sem->rss.gp_type);
> +       pr_info("  ->readers_block=%d\n", sem->readers_block);
> +       for_each_possible_cpu(cpu)
> +               sum += per_cpu(*sem->read_count, cpu);
> +       pr_info("  ->read_count=%d\n", sum);
> +       pr_info("  ->list_empty(rw_sem.wait_list)=%d\n",
> +              list_empty(&sem->rw_sem.wait_list));
> +       pr_info("  ->writer.task=%p\n", sem->writer.task);
> +}
> +
>  static void check_hung_task(struct task_struct *t, unsigned long timeout)
>  {
>         unsigned long switch_count = t->nvcsw + t->nivcsw;
> @@ -122,6 +146,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
>                 pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
>                         " disables this message.\n");
>                 sched_show_task(t);
> +               dump_percpu_rwsem_state(t->pcpu_rwsem_read_waiting);
>                 hung_task_show_lock = true;
>         }
>
> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> index 883cf1b..b3654ab 100644
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -82,7 +82,9 @@ int __percpu_down_read(struct percpu_rw_semaphore *sem, int try)
>         /*
>          * Avoid lockdep for the down/up_read() we already have them.
>          */
> +       current->pcpu_rwsem_read_waiting = sem;
>         __down_read(&sem->rw_sem);
> +       current->pcpu_rwsem_read_waiting = NULL;
>         this_cpu_inc(*sem->read_count);
>         __up_read(&sem->rw_sem);
>
> --
> 1.8.3.1



I haven't been able to reproduce this locally yet.

> If no, we would want a git tree for testing under syzbot.

Thinking about this, we could set up a sandbox instance that won't report
anything over email at all, but whose crashes will be available on the web.
We could point this instance at a custom git tree, where additional
patches can be applied as necessary. The main question is: who will manage
this tree, and how? The tree needs to be rebased periodically, patches
applied, old patches taken out.


* Re: INFO: task hung in __sb_start_write
  2018-06-19 11:47               ` Dmitry Vyukov
@ 2018-06-19 13:00                 ` Tetsuo Handa
  0 siblings, 0 replies; 13+ messages in thread
From: Tetsuo Handa @ 2018-06-19 13:00 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/19 20:47, Dmitry Vyukov wrote:
>> If no, we would want a git tree for testing under syzbot.
> 
> Thinking about this, we could set up a sandbox instance that won't report
> anything over email at all, but whose crashes will be available on the web.
> We could point this instance at a custom git tree, where additional
> patches can be applied as necessary. The main question is: who will manage
> this tree, and how? The tree needs to be rebased periodically, patches
> applied, old patches taken out.
> 

The simplest approach (if Linus can tolerate it) would be to merge any
debug printk() patches into linux.git behind a single kernel config option
(e.g. CONFIG_DEBUG_AID_FOR_SYZBOT=y) for turning them on/off.
Assuming that we will eventually be able to move to saving and analyzing
vmcore files, some bits of debug printk() code would be tolerable for now?

Another possible approach (if Andrew can tolerate it) would be to use
linux-next.git and carry the debug printk() patches via the mmotm tree.



* Re: INFO: task hung in __sb_start_write
  2018-06-19 11:10             ` Tetsuo Handa
  2018-06-19 11:47               ` Dmitry Vyukov
@ 2018-07-11 11:13               ` Tetsuo Handa
  2018-07-13 10:38                 ` Tetsuo Handa
  1 sibling, 1 reply; 13+ messages in thread
From: Tetsuo Handa @ 2018-07-11 11:13 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

On 2018/06/19 20:10, Tetsuo Handa wrote:
> On 2018/06/16 4:40, Tetsuo Handa wrote:
>> Hmm, there might be other locations calling percpu_rwsem_release() ?
> 
> There are other locations calling percpu_rwsem_release(), but quite few.
> 
> include/linux/fs.h:1494:#define __sb_writers_release(sb, lev)   \
> include/linux/fs.h-1495-        percpu_rwsem_release(&(sb)->s_writers.rw_sem[(lev)-1], 1, _THIS_IP_)
> 
> fs/btrfs/transaction.c:1821:            __sb_writers_release(fs_info->sb, SB_FREEZE_FS);
> fs/aio.c:1566:                  __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> fs/xfs/xfs_aops.c:211:  __sb_writers_release(ioend->io_inode->i_sb, SB_FREEZE_FS);
> 
> 
> 
> I'd like to check what atomic_long_read(&sem->rw_sem.count) says
> when hung task is reported.
> 

syzbot reproduced this problem with the patch applied.

percpu_rw_semaphore(00000000082ac9da)
  ->rw_sem.count=0xfffffffe00000001
  ->rss.gp_state=2
  ->rss.gp_count=1
  ->rss.cb_state=0
  ->rss.gp_type=1
  ->readers_block=1
  ->read_count=0
  ->list_empty(rw_sem.wait_list)=0
  ->writer.task=          (null)

The output shows that percpu_down_read() was blocked because
somebody had called percpu_down_write().

  DEFINE_STATIC_PERCPU_RWSEM(sem);
  percpu_down_write(&sem);
  percpu_down_read(&sem);
  percpu_up_read(&sem);
  percpu_up_write(&sem);

The next step is to find out who is calling percpu_down_write(). How do we want
to do this? We don't want to annoy normal linux-next.git testers. Something like the patch below?

---
 include/linux/percpu-rwsem.h | 4 ++++
 lib/Kconfig.debug            | 6 ++++++
 2 files changed, 10 insertions(+)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 79b99d6..26e87c3 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -130,7 +130,9 @@ extern int __percpu_init_rwsem(struct percpu_rw_semaphore *,
 static inline void percpu_rwsem_release(struct percpu_rw_semaphore *sem,
 					bool read, unsigned long ip)
 {
+#ifndef CONFIG_DEBUG_AID_FOR_SYZBOT
 	lock_release(&sem->rw_sem.dep_map, 1, ip);
+#endif
 #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
 	if (!read)
 		sem->rw_sem.owner = RWSEM_OWNER_UNKNOWN;
@@ -140,7 +142,9 @@ static inline void percpu_rwsem_release(struct percpu_rw_semaphore *sem,
 static inline void percpu_rwsem_acquire(struct percpu_rw_semaphore *sem,
 					bool read, unsigned long ip)
 {
+#ifndef CONFIG_DEBUG_AID_FOR_SYZBOT
 	lock_acquire(&sem->rw_sem.dep_map, 0, 1, read, 1, NULL, ip);
+#endif
 #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
 	if (!read)
 		sem->rw_sem.owner = current;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c731ff9..f0d02e8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1181,6 +1181,12 @@ config DEBUG_LOCK_ALLOC
 	 spin_lock_init()/mutex_init()/etc., or whether there is any lock
 	 held during task exit.
 
+config DEBUG_AID_FOR_SYZBOT
+       bool "Additional debug options for syzbot"
+       default n
+       help
+	 This option is intended for testing by syzbot.
+
 config LOCKDEP
 	bool
 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
-- 

Hmm, given that neither xfs nor btrfs is in use, is it the aio code?


* Re: INFO: task hung in __sb_start_write
  2018-07-11 11:13               ` Tetsuo Handa
@ 2018-07-13 10:38                 ` Tetsuo Handa
  0 siblings, 0 replies; 13+ messages in thread
From: Tetsuo Handa @ 2018-07-13 10:38 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, syzbot, linux-fsdevel,
	LKML, syzkaller-bugs, Al Viro, Linus Torvalds

Dmitry noticed what was wrong and fixed the bug in
https://github.com/google/syzkaller/commit/06c33b3af0ff4072fb002879f83077c9d162a224 .

Since the bug has not reproduced for the last 24 hours, it is OK to close this report.

#syz invalid

Thanks all.



end of thread, other threads:[~2018-07-13 10:39 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-03 10:17 INFO: task hung in __sb_start_write syzbot
2018-06-10 14:47 ` Tetsuo Handa
2018-06-11  7:30   ` Peter Zijlstra
2018-06-11  7:39     ` Dmitry Vyukov
2018-06-14 10:33       ` Tetsuo Handa
2018-06-14 10:33         ` Tetsuo Handa
2018-06-15  9:19         ` Dmitry Vyukov
2018-06-15 19:40           ` Tetsuo Handa
2018-06-19 11:10             ` Tetsuo Handa
2018-06-19 11:47               ` Dmitry Vyukov
2018-06-19 13:00                 ` Tetsuo Handa
2018-07-11 11:13               ` Tetsuo Handa
2018-07-13 10:38                 ` Tetsuo Handa

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.