linux-kernel.vger.kernel.org archive mirror
* possible deadlock in pipe_lock (2)
@ 2019-02-10 18:23 syzbot
  2019-02-11  7:38 ` Amir Goldstein
  0 siblings, 1 reply; 13+ messages in thread
From: syzbot @ 2019-02-10 18:23 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel, syzkaller-bugs, viro

Hello,

syzbot found the following crash on:

HEAD commit:    74e96711e337 Merge tag 'platform-drivers-x86-v5.0-2' of gi..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=11225b27400000
kernel config:  https://syzkaller.appspot.com/x/.config?x=8f00801d7b7c4fe6
dashboard link: https://syzkaller.appspot.com/bug?extid=31d8b84465a7cbfd8515
compiler:       gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com

kvm [15406]: vcpu0, guest rIP: 0xfff0 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x3, nop
======================================================
WARNING: possible circular locking dependency detected
5.0.0-rc5+ #63 Not tainted
------------------------------------------------------
overlayfs: filesystem on './file0' not supported as upperdir
syz-executor.2/15417 is trying to acquire lock:
00000000dcbcd11f (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
00000000dcbcd11f (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80 fs/pipe.c:70

but task is already holding lock:
kobject: 'loop0' (00000000d9cc93a0): kobject_uevent_env
00000000c13db9a0 (sb_writers#3){.+.+}, at: file_start_write include/linux/fs.h:2816 [inline]
00000000c13db9a0 (sb_writers#3){.+.+}, at: do_splice+0xceb/0x1330 fs/splice.c:1151

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:
kobject: 'loop0' (00000000d9cc93a0): fill_kobj_path: path  
= '/devices/virtual/block/loop0'

-> #2 (sb_writers#3){.+.+}:
        percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36  
[inline]
        percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
        __sb_start_write+0x20b/0x360 fs/super.c:1389
        sb_start_write include/linux/fs.h:1603 [inline]
        mnt_want_write+0x3f/0xc0 fs/namespace.c:357
        ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
        ovl_setattr+0xdd/0x950 fs/overlayfs/inode.c:30
        notify_change+0xad9/0xfb0 fs/attr.c:334
kobject: 'loop5' (00000000a501efc5): kobject_uevent_env
        do_truncate+0x158/0x220 fs/open.c:63
        handle_truncate fs/namei.c:3008 [inline]
        do_last fs/namei.c:3424 [inline]
        path_openat+0x2cc6/0x4690 fs/namei.c:3534
        do_filp_open+0x1a1/0x280 fs/namei.c:3564
        do_sys_open+0x3fe/0x5d0 fs/open.c:1063
        __do_sys_openat fs/open.c:1090 [inline]
        __se_sys_openat fs/open.c:1084 [inline]
        __x64_sys_openat+0x9d/0x100 fs/open.c:1084
        do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
kobject: 'loop5' (00000000a501efc5): fill_kobj_path: path  
= '/devices/virtual/block/loop5'
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (&ovl_i_mutex_key[depth]){+.+.}:
        down_write+0x38/0x90 kernel/locking/rwsem.c:70
        inode_lock include/linux/fs.h:757 [inline]
        ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
        call_write_iter include/linux/fs.h:1863 [inline]
        new_sync_write fs/read_write.c:474 [inline]
        __vfs_write+0x613/0x8e0 fs/read_write.c:487
kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
        __kernel_write+0x110/0x3b0 fs/read_write.c:506
        write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
        splice_from_pipe_feed fs/splice.c:503 [inline]
        __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
        splice_from_pipe+0x108/0x170 fs/splice.c:662
        default_file_splice_write+0x3c/0x90 fs/splice.c:809
        do_splice_from fs/splice.c:851 [inline]
        do_splice+0x644/0x1330 fs/splice.c:1152
        __do_sys_splice fs/splice.c:1419 [inline]
        __se_sys_splice fs/splice.c:1399 [inline]
        __x64_sys_splice+0x2c6/0x330 fs/splice.c:1399
kobject: 'loop4' (000000009e2b886d): fill_kobj_path: path  
= '/devices/virtual/block/loop4'
        do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&pipe->mutex/1){+.+.}:
        lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3841
        __mutex_lock_common kernel/locking/mutex.c:925 [inline]
        __mutex_lock+0xf7/0x1310 kernel/locking/mutex.c:1072
        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
        pipe_lock_nested fs/pipe.c:62 [inline]
        pipe_lock+0x6e/0x80 fs/pipe.c:70
        iter_file_splice_write+0x18b/0xbe0 fs/splice.c:700
        do_splice_from fs/splice.c:851 [inline]
        do_splice+0x644/0x1330 fs/splice.c:1152
        __do_sys_splice fs/splice.c:1419 [inline]
        __se_sys_splice fs/splice.c:1399 [inline]
        __x64_sys_splice+0x2c6/0x330 fs/splice.c:1399
        do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
   &pipe->mutex/1 --> &ovl_i_mutex_key[depth] --> sb_writers#3

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(sb_writers#3);
                                lock(&ovl_i_mutex_key[depth]);
                                lock(sb_writers#3);
   lock(&pipe->mutex/1);

  *** DEADLOCK ***

1 lock held by syz-executor.2/15417:
  #0: 00000000c13db9a0 (sb_writers#3){.+.+}, at: file_start_write include/linux/fs.h:2816 [inline]
  #0: 00000000c13db9a0 (sb_writers#3){.+.+}, at: do_splice+0xceb/0x1330 fs/splice.c:1151

stack backtrace:
CPU: 1 PID: 15417 Comm: syz-executor.2 Not tainted 5.0.0-rc5+ #63
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:77 [inline]
  dump_stack+0x172/0x1f0 lib/dump_stack.c:113
  print_circular_bug.isra.0.cold+0x1cc/0x28f kernel/locking/lockdep.c:1224
  check_prev_add kernel/locking/lockdep.c:1866 [inline]
  check_prevs_add kernel/locking/lockdep.c:1979 [inline]
  validate_chain kernel/locking/lockdep.c:2350 [inline]
  __lock_acquire+0x2f00/0x4700 kernel/locking/lockdep.c:3338
  lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3841
  __mutex_lock_common kernel/locking/mutex.c:925 [inline]
  __mutex_lock+0xf7/0x1310 kernel/locking/mutex.c:1072
  mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
  pipe_lock_nested fs/pipe.c:62 [inline]
  pipe_lock+0x6e/0x80 fs/pipe.c:70
  iter_file_splice_write+0x18b/0xbe0 fs/splice.c:700
  do_splice_from fs/splice.c:851 [inline]
  do_splice+0x644/0x1330 fs/splice.c:1152
  __do_sys_splice fs/splice.c:1419 [inline]
  __se_sys_splice fs/splice.c:1399 [inline]
  __x64_sys_splice+0x2c6/0x330 fs/splice.c:1399
  do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x457e39
Code: ad b8 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7  
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff  
ff 0f 83 7b b8 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fe58b4bac78 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 0000000000457e39
RDX: 0000000000000007 RSI: 0000000000000000 RDI: 0000000000000005
RBP: 000000000073bfa0 R08: 000100000000ffe0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fe58b4bb6d4
R13: 00000000004c6c00 R14: 00000000004db710 R15: 00000000ffffffff


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#bug-status-tracking for how to communicate with syzbot.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-10 18:23 possible deadlock in pipe_lock (2) syzbot
@ 2019-02-11  7:38 ` Amir Goldstein
  2019-02-11 12:06   ` Miklos Szeredi
  0 siblings, 1 reply; 13+ messages in thread
From: Amir Goldstein @ 2019-02-11  7:38 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro, syzbot, overlayfs

On Sun, Feb 10, 2019 at 8:23 PM syzbot
<syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:    74e96711e337 Merge tag 'platform-drivers-x86-v5.0-2' of gi..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=11225b27400000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=8f00801d7b7c4fe6
> dashboard link: https://syzkaller.appspot.com/bug?extid=31d8b84465a7cbfd8515
> compiler:       gcc (GCC) 9.0.0 20181231 (experimental)
>
> Unfortunately, I don't have any reproducer for this crash yet.
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com
>
> kvm [15406]: vcpu0, guest rIP: 0xfff0 kvm_set_msr_common:
> MSR_IA32_DEBUGCTLMSR 0x3, nop
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.0.0-rc5+ #63 Not tainted
> ------------------------------------------------------
> overlayfs: filesystem on './file0' not supported as upperdir
> syz-executor.2/15417 is trying to acquire lock:
> 00000000dcbcd11f (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62
> [inline]
> 00000000dcbcd11f (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
> fs/pipe.c:70
>
> but task is already holding lock:
> kobject: 'loop0' (00000000d9cc93a0): kobject_uevent_env
> 00000000c13db9a0 (sb_writers#3){.+.+}, at: file_start_write
> include/linux/fs.h:2816 [inline]
> 00000000c13db9a0 (sb_writers#3){.+.+}, at: do_splice+0xceb/0x1330
> fs/splice.c:1151
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
> kobject: 'loop0' (00000000d9cc93a0): fill_kobj_path: path
> = '/devices/virtual/block/loop0'
>
> -> #2 (sb_writers#3){.+.+}:
>         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
> [inline]
>         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
>         __sb_start_write+0x20b/0x360 fs/super.c:1389
>         sb_start_write include/linux/fs.h:1603 [inline]
>         mnt_want_write+0x3f/0xc0 fs/namespace.c:357
>         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
>         ovl_setattr+0xdd/0x950 fs/overlayfs/inode.c:30
>         notify_change+0xad9/0xfb0 fs/attr.c:334
> kobject: 'loop5' (00000000a501efc5): kobject_uevent_env
>         do_truncate+0x158/0x220 fs/open.c:63
>         handle_truncate fs/namei.c:3008 [inline]
>         do_last fs/namei.c:3424 [inline]
>         path_openat+0x2cc6/0x4690 fs/namei.c:3534
>         do_filp_open+0x1a1/0x280 fs/namei.c:3564
>         do_sys_open+0x3fe/0x5d0 fs/open.c:1063
>         __do_sys_openat fs/open.c:1090 [inline]
>         __se_sys_openat fs/open.c:1084 [inline]
>         __x64_sys_openat+0x9d/0x100 fs/open.c:1084
>         do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
> kobject: 'loop5' (00000000a501efc5): fill_kobj_path: path
> = '/devices/virtual/block/loop5'
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
>         down_write+0x38/0x90 kernel/locking/rwsem.c:70
>         inode_lock include/linux/fs.h:757 [inline]
>         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
>         call_write_iter include/linux/fs.h:1863 [inline]
>         new_sync_write fs/read_write.c:474 [inline]
>         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
>         __kernel_write+0x110/0x3b0 fs/read_write.c:506
>         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
>         splice_from_pipe_feed fs/splice.c:503 [inline]
>         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
>         splice_from_pipe+0x108/0x170 fs/splice.c:662
>         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x644/0x1330 fs/splice.c:1152
>         __do_sys_splice fs/splice.c:1419 [inline]
>         __se_sys_splice fs/splice.c:1399 [inline]
>         __x64_sys_splice+0x2c6/0x330 fs/splice.c:1399
> kobject: 'loop4' (000000009e2b886d): fill_kobj_path: path
> = '/devices/virtual/block/loop4'
>         do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #0 (&pipe->mutex/1){+.+.}:
>         lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3841
>         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
>         __mutex_lock+0xf7/0x1310 kernel/locking/mutex.c:1072
>         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
>         pipe_lock_nested fs/pipe.c:62 [inline]
>         pipe_lock+0x6e/0x80 fs/pipe.c:70
>         iter_file_splice_write+0x18b/0xbe0 fs/splice.c:700
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x644/0x1330 fs/splice.c:1152
>         __do_sys_splice fs/splice.c:1419 [inline]
>         __se_sys_splice fs/splice.c:1399 [inline]
>         __x64_sys_splice+0x2c6/0x330 fs/splice.c:1399
>         do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> other info that might help us debug this:
>
> Chain exists of:
>    &pipe->mutex/1 --> &ovl_i_mutex_key[depth] --> sb_writers#3
>
>   Possible unsafe locking scenario:
>
>         CPU0                    CPU1
>         ----                    ----
>    lock(sb_writers#3);
>                                 lock(&ovl_i_mutex_key[depth]);
>                                 lock(sb_writers#3);
>    lock(&pipe->mutex/1);
>
>   *** DEADLOCK ***
>

Miklos,

It's good that this report popped up again, because I went to
look back at my notes from the previous report [1].
If I was right in my previous analysis, then we must have a real
deadlock in the current "lazy copy up" WIP patches. Right?

"This looks like a false positive because lockdep is not aware of
s_stack_depth of the file (fs) associated with the pipe.

HOWEVER, because overlayfs calls do_splice_direct() on copy up,
it is important to make sure that higher layer do_splice_direct() cannot
recursively trigger copy up.

At this time, overlayfs does the copy up on open for write, so any
higher layer do_splice_direct() will already get an out file that has been
copied up, but with future plans for "lazy copy up on first write", we need
to be careful."

Should we replace do_splice_direct() in copy up with a version
that uses an overlay private pipe?
Should we splice the lower pages directly to the overlay inode instead
of to the upper one?
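
For reference, the data copy in copy up today boils down to a loop like
the sketch below (heavily simplified, not verbatim from
fs/overlayfs/copy_up.c). Note that do_splice_direct() already goes
through an internal pipe, but it is the calling task's cached splice
pipe (current->splice_pipe), not one owned by overlayfs:

        /* heavily simplified sketch of the copy-up data loop */
        while (len) {
                long bytes;

                /* splices lower -> upper via current->splice_pipe */
                bytes = do_splice_direct(lower_file, &lower_pos,
                                         upper_file, &upper_pos, len, 0);
                if (bytes <= 0)
                        break;
                len -= bytes;
        }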

Thanks,
Amir.

[1] https://lore.kernel.org/lkml/CAOQ4uxgiERNAyX4PDHf1DQLBg56rBANWLe575TGkz_WpbxaoCg@mail.gmail.com/

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11  7:38 ` Amir Goldstein
@ 2019-02-11 12:06   ` Miklos Szeredi
  2019-02-11 12:37     ` Miklos Szeredi
  0 siblings, 1 reply; 13+ messages in thread
From: Miklos Szeredi @ 2019-02-11 12:06 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro, syzbot, overlayfs

On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Sun, Feb 10, 2019 at 8:23 PM syzbot
> <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:

> > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> >         inode_lock include/linux/fs.h:757 [inline]
> >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> >         call_write_iter include/linux/fs.h:1863 [inline]
> >         new_sync_write fs/read_write.c:474 [inline]
> >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> >         splice_from_pipe_feed fs/splice.c:503 [inline]
> >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> >         default_file_splice_write+0x3c/0x90 fs/splice.c:809

Irrelevant to the lockdep splat, but why isn't there an
ovl_splice_write() that just recurses into realfile->splice_write()?
Sounds like a much more efficient way to handle splice read and
write...
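
A minimal pass-through sketch, reusing the helpers already in
fs/overlayfs/file.c and calling the exported iter_file_splice_write()
on the real file, might look roughly like the following (whether that
is the right entry point is part of the question; no such function
exists in the tree today):

static ssize_t ovl_splice_write(struct pipe_inode_info *pipe,
                                struct file *out, loff_t *ppos,
                                size_t len, unsigned int flags)
{
        const struct cred *old_cred;
        struct fd real;
        ssize_t ret;

        ret = ovl_real_fdget(out, &real);
        if (ret)
                return ret;

        old_cred = ovl_override_creds(file_inode(out)->i_sb);
        file_start_write(real.file);
        /* splice straight into the underlying real file */
        ret = iter_file_splice_write(pipe, real.file, ppos, len, flags);
        file_end_write(real.file);
        revert_creds(old_cred);

        ovl_file_accessed(out);
        fdput(real);
        return ret;
}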

[...]

> Miklos,
>
> It's good that this report popped up again, because I went to
> look back at my notes from the previous report [1].
> If I was right in my previous analysis, then we must have a real
> deadlock in the current "lazy copy up" WIP patches. Right?

Hmm, AFAICS this circular dependency translated into layman's terms:

pipe lock -> ovl inode lock  (splice to ovl file)

ovl inode lock -> upper freeze lock (truncate of ovl file)

upper freeze lock -> pipe lock (splice to upper file)
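
In userspace terms, the three operations behind these edges are roughly
the following (illustration only, not a reproducer; syzbot has none,
and in the report the edges come from different tasks racing):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* assumes "ovl/" is an overlay mount whose upperdir lives on the
 * filesystem holding "upper/file" */
int main(void)
{
        char buf[4096] = { 0 };
        int pfd[2];
        int ovl_fd, upper_fd;

        pipe(pfd);
        ovl_fd = open("ovl/file", O_WRONLY | O_CREAT, 0644);
        upper_fd = open("upper/file", O_WRONLY | O_CREAT, 0644);

        /* splice to ovl file: pipe lock -> ovl inode lock
         * (default_file_splice_write -> __kernel_write -> ovl_write_iter) */
        write(pfd[1], buf, sizeof(buf));
        splice(pfd[0], NULL, ovl_fd, NULL, sizeof(buf), 0);

        /* truncate of ovl file: ovl inode lock -> upper freeze lock
         * (do_truncate -> notify_change -> ovl_setattr -> ovl_want_write) */
        ftruncate(ovl_fd, 0);

        /* splice to upper file: upper freeze lock -> pipe lock
         * (do_splice: file_start_write -> iter_file_splice_write) */
        write(pfd[1], buf, sizeof(buf));
        splice(pfd[0], NULL, upper_fd, NULL, sizeof(buf), 0);

        return 0;
}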

> "This looks like a false positive because lockdep is not aware of
> s_stack_depth of the file (fs) associated with the pipe.

But AFAICS the above dependency chain doesn't include copy up or a
stacked overlay, so it looks like a real deadlock.

Am I missing something?

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 12:06   ` Miklos Szeredi
@ 2019-02-11 12:37     ` Miklos Szeredi
  2019-02-11 13:08       ` Amir Goldstein
  0 siblings, 1 reply; 13+ messages in thread
From: Miklos Szeredi @ 2019-02-11 12:37 UTC (permalink / raw)
  To: Amir Goldstein, Jan Kara
  Cc: linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro, syzbot, overlayfs

On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> >
> > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
>
> > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > >         inode_lock include/linux/fs.h:757 [inline]
> > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > >         call_write_iter include/linux/fs.h:1863 [inline]
> > >         new_sync_write fs/read_write.c:474 [inline]
> > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>
> Irrelevant to the lockdep splat, but why isn't there an
> ovl_splice_write() that just recurses into realfile->splice_write()?
> Sounds like a much more efficient way to handle splice read and
> write...
>
> [...]
>
> > Miklos,
> >
> > It's good that this report popped up again, because I went to
> > look back at my notes from the previous report [1].
> > If I was right in my previous analysis, then we must have a real
> > deadlock in the current "lazy copy up" WIP patches. Right?
>
> Hmm, AFAICS this circular dependency translated into layman's terms:
>
> pipe lock -> ovl inode lock  (splice to ovl file)
>
> ovl inode lock -> upper freeze lock (truncate of ovl file)
>
> upper freeze lock -> pipe lock (splice to upper file)

So what can we do with this?

The "freeze lock -> inode lock" dependency is fixed.   This is
reversed in overlay to "ovl inode lock -> upper freeze lock", which is
okay, because this is a nesting that cannot be reversed.   But in
splice the pipe lock comes in between: "freeze lock -> pipe lock ->
inode lock" which breaks this nesting direction and creates a true
reverse dependency between ovl inode lock and upper freeze lock.
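
Condensed into a sketch, the effective ordering of the plain splice-to-file
path is roughly the following (not the literal fs/splice.c code, and the
per-buffer loop is collapsed into a single __kernel_write() call):

    static ssize_t splice_write_order_sketch(struct pipe_inode_info *pipe,
                                             struct file *out, loff_t *ppos,
                                             const void *buf, size_t len)
    {
        ssize_t ret;

        file_start_write(out);  /* 1: freeze lock (sb_writers, shared) */
        pipe_lock(pipe);        /* 2: pipe lock */
        /* 3: ->write_iter() takes the inode lock (the ovl inode lock
         *    when "out" is an overlayfs file) */
        ret = __kernel_write(out, buf, len, ppos);
        pipe_unlock(pipe);
        file_end_write(out);

        return ret;
    }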

The only way I see this could be fixed is to move the freeze lock
inside the pipe lock.  But that would mean splice/sendfile/etc could
be frozen with the pipe lock held.  It doesn't look nice.

Any other ideas?

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 12:37     ` Miklos Szeredi
@ 2019-02-11 13:08       ` Amir Goldstein
  2019-02-11 14:33         ` Miklos Szeredi
  0 siblings, 1 reply; 13+ messages in thread
From: Amir Goldstein @ 2019-02-11 13:08 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > >
> > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> >
> > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > >         inode_lock include/linux/fs.h:757 [inline]
> > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > >         new_sync_write fs/read_write.c:474 [inline]
> > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> >
> > Irrelevant to the lockdep splat, but why isn't there an
> > ovl_splice_write() that just recurses into realfile->splice_write()?
> > Sounds like a much more efficient way to handle splice read and
> > write...
> >
> > [...]
> >
> > > Miklos,
> > >
> > > Its good that this report popped up again, because I went to
> > > look back at my notes from previous report [1].
> > > If I was right in my previous analysis then we must have a real
> > > deadlock in current "lazy copy up" WIP patches. Right?
> >
> > Hmm, AFAICS this circular dependency translated into layman's terms:
> >
> > pipe lock -> ovl inode lock  (splice to ovl file)
> >
> > ovl inode lock -> upper freeze lock (truncate of ovl file)
> >
> > upper freeze lock -> pipe lock (splice to upper file)
>
> So what can we do with this?
>
> The "freeze lock -> inode lock" dependency is fixed.   This is
> reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> okay, because this is a nesting that cannot be reversed.   But in
> splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> inode lock" which breaks this nesting direction and creates a true
> reverse dependency between ovl inode lock and upper freeze lock.
>
> The only way I see this could be fixed is to move the freeze lock
> inside the pipe lock.  But that would mean splice/sendfile/etc could
> be frozen with the pipe lock held.  It doesn't look nice.
>
> Any other ideas?
>

[CC Jan]

I think we are allowed to file_start_write_trylock(upper)
before ovl_inode_lock(). This is ONLY needed to cover the corner
case of upper being frozen in between "upper freeze lock -> pipe lock"
and thread B being in between "ovl inode lock -> upper freeze lock".
Is it OK to return a transient error in this copy-up corner case?
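
Something along these lines (rough sketch only; the -EBUSY return and the
upper_sb argument are just for illustration, and the caller would pair it
with ovl_inode_unlock() + sb_end_write()):

    static int ovl_lock_upper_and_inode(struct inode *ovl_inode,
                                        struct super_block *upper_sb)
    {
        int err;

        /* take upper freeze protection *before* the ovl inode lock and
         * bail out with a transient error if upper is currently frozen */
        if (!sb_start_write_trylock(upper_sb))
            return -EBUSY;

        err = ovl_inode_lock(ovl_inode);
        if (err) {
            sb_end_write(upper_sb);
            return err;
        }

        return 0;
    }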

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 13:08       ` Amir Goldstein
@ 2019-02-11 14:33         ` Miklos Szeredi
  2019-02-11 15:06           ` Amir Goldstein
  2019-02-12 11:14           ` Jan Kara
  0 siblings, 2 replies; 13+ messages in thread
From: Miklos Szeredi @ 2019-02-11 14:33 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Mon, Feb 11, 2019 at 2:08 PM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > >
> > > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > > >
> > > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> > >
> > > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > > >         inode_lock include/linux/fs.h:757 [inline]
> > > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > > >         new_sync_write fs/read_write.c:474 [inline]
> > > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> > >
> > > Irrelevant to the lockdep splat, but why isn't there an
> > > ovl_splice_write() that just recurses into realfile->splice_write()?
> > > Sounds like a much more efficient way to handle splice read and
> > > write...
> > >
> > > [...]
> > >
> > > > Miklos,
> > > >
> > > > Its good that this report popped up again, because I went to
> > > > look back at my notes from previous report [1].
> > > > If I was right in my previous analysis then we must have a real
> > > > deadlock in current "lazy copy up" WIP patches. Right?
> > >
> > > Hmm, AFAICS this circular dependency translated into layman's terms:
> > >
> > > pipe lock -> ovl inode lock  (splice to ovl file)
> > >
> > > ovl inode lock -> upper freeze lock (truncate of ovl file)
> > >
> > > upper freeze lock -> pipe lock (splice to upper file)
> >
> > So what can we do with this?
> >
> > The "freeze lock -> inode lock" dependency is fixed.   This is
> > reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> > okay, because this is a nesting that cannot be reversed.   But in
> > splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> > inode lock" which breaks this nesting direction and creates a true
> > reverse dependency between ovl inode lock and upper freeze lock.
> >
> > The only way I see this could be fixed is to move the freeze lock
> > inside the pipe lock.  But that would mean splice/sendfile/etc could
> > be frozen with the pipe lock held.  It doesn't look nice.
> >
> > Any other ideas?
> >
>
> [CC Jan]
>
> I think we are allowed to file_start_write_trylock(upper)
> before ovl_inode_lock(). This in ONLY needed to cover the corner
> case of upper being frozen in between "upper freeze lock -> pipe lock"
> and thread B being in between "ovl inode lock -> upper freeze lock".
> Is it OK to return a transient error in this corner copy up case?

This case shouldn't happen assuming adherence to the "upper shall not
be modified while part of the overlay" rule.

Side note: I don't see that it has anything to do with copy-up, but I
may be missing something.

My other thought is that perhaps sb_start_write() should invoke
s_ops->start_write() so that overlay can do the freeze protection on
the upper early.
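
A purely hypothetical sketch of that (no such hook exists today, the names
are made up):

    /* hypothetical addition to struct super_operations */
    void (*start_write)(struct super_block *sb, int level);

    /* overlayfs could then take the upper freeze protection right away */
    static void ovl_start_write(struct super_block *sb, int level)
    {
        struct ovl_fs *ofs = sb->s_fs_info;

        __sb_start_write(ofs->upper_mnt->mnt_sb, level, true);
    }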

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 14:33         ` Miklos Szeredi
@ 2019-02-11 15:06           ` Amir Goldstein
  2019-02-11 15:40             ` Miklos Szeredi
  2019-02-12 11:14           ` Jan Kara
  1 sibling, 1 reply; 13+ messages in thread
From: Amir Goldstein @ 2019-02-11 15:06 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Mon, Feb 11, 2019 at 4:33 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Mon, Feb 11, 2019 at 2:08 PM Amir Goldstein <amir73il@gmail.com> wrote:
> >
> > On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > >
> > > On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > >
> > > > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > > > >
> > > > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> > > >
> > > > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > > > >         inode_lock include/linux/fs.h:757 [inline]
> > > > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > > > >         new_sync_write fs/read_write.c:474 [inline]
> > > > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> > > >
> > > > Irrelevant to the lockdep splat, but why isn't there an
> > > > ovl_splice_write() that just recurses into realfile->splice_write()?
> > > > Sounds like a much more efficient way to handle splice read and
> > > > write...
> > > >
> > > > [...]
> > > >
> > > > > Miklos,
> > > > >
> > > > > Its good that this report popped up again, because I went to
> > > > > look back at my notes from previous report [1].
> > > > > If I was right in my previous analysis then we must have a real
> > > > > deadlock in current "lazy copy up" WIP patches. Right?
> > > >
> > > > Hmm, AFAICS this circular dependency translated into layman's terms:
> > > >
> > > > pipe lock -> ovl inode lock  (splice to ovl file)
> > > >
> > > > ovl inode lock -> upper freeze lock (truncate of ovl file)
> > > >
> > > > upper freeze lock -> pipe lock (splice to upper file)
> > >
> > > So what can we do with this?
> > >
> > > The "freeze lock -> inode lock" dependency is fixed.   This is
> > > reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> > > okay, because this is a nesting that cannot be reversed.   But in
> > > splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> > > inode lock" which breaks this nesting direction and creates a true
> > > reverse dependency between ovl inode lock and upper freeze lock.
> > >
> > > The only way I see this could be fixed is to move the freeze lock
> > > inside the pipe lock.  But that would mean splice/sendfile/etc could
> > > be frozen with the pipe lock held.  It doesn't look nice.
> > >
> > > Any other ideas?
> > >
> >
> > [CC Jan]
> >
> > I think we are allowed to file_start_write_trylock(upper)
> > before ovl_inode_lock(). This in ONLY needed to cover the corner
> > case of upper being frozen in between "upper freeze lock -> pipe lock"
> > and thread B being in between "ovl inode lock -> upper freeze lock".
> > Is it OK to return a transient error in this corner copy up case?
>
> This case shouldn't happen assuming adherence to the "upper shall not
> be modified while part of the overlay" rule.
>

Right. And unfreezing upper should untangle this deadlock,
because both threads A and B are taking a shared sb_writers lock.

> Side note: I don't see that it has anything to do with copy-up, but I
> may be missing something.
>

You are right. I was confusing your "ovl inode lock" with ovl_inode_lock(),
but the latter is taken after upper freeze lock, so irrelevant.

> My other thought is that perhaps sb_start_write() should invoke
> s_ops->start_write() so that overlay can do the freeze protection on
> the upper early.
>

Sorry, I don't see how that solves anything except for the lockdep
warning. In both cases threads A and B would block until upper
is unfrozen, only without a lockdep warning.
Also, I am quite sure that taking upper freeze lock early will generate
many new lockdep warnings.

And yes, I could be missing something...

Anyway, what about the recursive do_splice_direct() issue
with lazy copy up? Do you have an easy solution for that?

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 15:06           ` Amir Goldstein
@ 2019-02-11 15:40             ` Miklos Szeredi
  2019-02-11 17:04               ` Amir Goldstein
  0 siblings, 1 reply; 13+ messages in thread
From: Miklos Szeredi @ 2019-02-11 15:40 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Mon, Feb 11, 2019 at 4:07 PM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Mon, Feb 11, 2019 at 4:33 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > On Mon, Feb 11, 2019 at 2:08 PM Amir Goldstein <amir73il@gmail.com> wrote:
> > >
> > > On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > >
> > > > On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > > >
> > > > > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > > > > >
> > > > > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > > > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> > > > >
> > > > > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > > > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > > > > >         inode_lock include/linux/fs.h:757 [inline]
> > > > > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > > > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > > > > >         new_sync_write fs/read_write.c:474 [inline]
> > > > > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > > > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > > > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > > > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > > > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > > > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > > > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> > > > >
> > > > > Irrelevant to the lockdep splat, but why isn't there an
> > > > > ovl_splice_write() that just recurses into realfile->splice_write()?
> > > > > Sounds like a much more efficient way to handle splice read and
> > > > > write...
> > > > >
> > > > > [...]
> > > > >
> > > > > > Miklos,
> > > > > >
> > > > > > Its good that this report popped up again, because I went to
> > > > > > look back at my notes from previous report [1].
> > > > > > If I was right in my previous analysis then we must have a real
> > > > > > deadlock in current "lazy copy up" WIP patches. Right?
> > > > >
> > > > > Hmm, AFAICS this circular dependency translated into layman's terms:
> > > > >
> > > > > pipe lock -> ovl inode lock  (splice to ovl file)
> > > > >
> > > > > ovl inode lock -> upper freeze lock (truncate of ovl file)
> > > > >
> > > > > upper freeze lock -> pipe lock (splice to upper file)
> > > >
> > > > So what can we do with this?
> > > >
> > > > The "freeze lock -> inode lock" dependency is fixed.   This is
> > > > reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> > > > okay, because this is a nesting that cannot be reversed.   But in
> > > > splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> > > > inode lock" which breaks this nesting direction and creates a true
> > > > reverse dependency between ovl inode lock and upper freeze lock.
> > > >
> > > > The only way I see this could be fixed is to move the freeze lock
> > > > inside the pipe lock.  But that would mean splice/sendfile/etc could
> > > > be frozen with the pipe lock held.  It doesn't look nice.
> > > >
> > > > Any other ideas?
> > > >
> > >
> > > [CC Jan]
> > >
> > > I think we are allowed to file_start_write_trylock(upper)
> > > before ovl_inode_lock(). This in ONLY needed to cover the corner
> > > case of upper being frozen in between "upper freeze lock -> pipe lock"
> > > and thread B being in between "ovl inode lock -> upper freeze lock".
> > > Is it OK to return a transient error in this corner copy up case?
> >
> > This case shouldn't happen assuming adherence to the "upper shall not
> > be modified while part of the overlay" rule.
> >
>
> Right. And unfreezing upper should untangle this deadlock,
> because both thread A and B are taking a shared sb_writers lock.

I don't think that'll work.   The deadlock would involve freezing for
sure, otherwise sb_start_write() won't block.  But there's no way to
cancel sb_wait_write() once it's called, so the deadlock is permanent.
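
For reference, sb_wait_write() is roughly the following (from memory, may
not be the exact fs/super.c code); note there is no trylock or undo variant
on the freezer side:

    static void sb_wait_write(struct super_block *sb, int level)
    {
        /* blocks until all sb_start_write() holders at this level drain */
        percpu_down_write(sb->s_writers.rw_sem + level - 1);
    }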

> > Side note: I don't see that it has anything to do with copy-up, but I
> > may be missing something.
> >
>
> You are right. I was confusing your "ovl inode lock" with ovl_inode_lock(),
> but the latter is taken after upper freeze lock, so irrelevant.
>
> > My other thought is that perhaps sb_start_write() should invoke
> > s_ops->start_write() so that overlay can do the freeze protection on
> > the upper early.
> >
>
> Sorry, I don't see how that solves anything expect for the lockdep
> warning. In both cases threads A and B would block until upper
> in unfrozen, only without a lockdep warning.
> Also, I am quite sure that taking upper freeze lock early will generate
> many new lockdep warnings.

My thinking was to make the lock order:

  ovl freeze lock -> upper freeze lock -> ovl inode lock -> upper inode lock


> Anyway, what about the recursive do_splice_direct() issue
> with lazy copy up, do you have an easy solution for that?

Not sure.  I think splice_direct_to_actor() should be able to deal
with it.  E.g.

    pipe = current->splice_pipe;
    if (unlikely(!pipe)) {
        pipe = alloc_pipe_info();
        ...
    } else {
        current->splice_pipe = NULL;
    }

    ... do the actual splicing ...

    if (!current->splice_pipe) {
        current->splice_pipe = pipe;
    } else {
        free_pipe_info(pipe);
    }

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 15:40             ` Miklos Szeredi
@ 2019-02-11 17:04               ` Amir Goldstein
  0 siblings, 0 replies; 13+ messages in thread
From: Amir Goldstein @ 2019-02-11 17:04 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Mon, Feb 11, 2019 at 5:40 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Mon, Feb 11, 2019 at 4:07 PM Amir Goldstein <amir73il@gmail.com> wrote:
> >
> > On Mon, Feb 11, 2019 at 4:33 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > >
> > > On Mon, Feb 11, 2019 at 2:08 PM Amir Goldstein <amir73il@gmail.com> wrote:
> > > >
> > > > On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > > >
> > > > > On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > > > >
> > > > > > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > > > > > >
> > > > > > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > > > > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> > > > > >
> > > > > > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > > > > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > > > > > >         inode_lock include/linux/fs.h:757 [inline]
> > > > > > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > > > > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > > > > > >         new_sync_write fs/read_write.c:474 [inline]
> > > > > > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > > > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > > > > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > > > > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > > > > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > > > > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > > > > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > > > > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> > > > > >
> > > > > > Irrelevant to the lockdep splat, but why isn't there an
> > > > > > ovl_splice_write() that just recurses into realfile->splice_write()?
> > > > > > Sounds like a much more efficient way to handle splice read and
> > > > > > write...
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > > > Miklos,
> > > > > > >
> > > > > > > Its good that this report popped up again, because I went to
> > > > > > > look back at my notes from previous report [1].
> > > > > > > If I was right in my previous analysis then we must have a real
> > > > > > > deadlock in current "lazy copy up" WIP patches. Right?
> > > > > >
> > > > > > Hmm, AFAICS this circular dependency translated into layman's terms:
> > > > > >
> > > > > > pipe lock -> ovl inode lock  (splice to ovl file)
> > > > > >
> > > > > > ovl inode lock -> upper freeze lock (truncate of ovl file)
> > > > > >
> > > > > > upper freeze lock -> pipe lock (splice to upper file)
> > > > >
> > > > > So what can we do with this?
> > > > >
> > > > > The "freeze lock -> inode lock" dependency is fixed.   This is
> > > > > reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> > > > > okay, because this is a nesting that cannot be reversed.   But in
> > > > > splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> > > > > inode lock" which breaks this nesting direction and creates a true
> > > > > reverse dependency between ovl inode lock and upper freeze lock.
> > > > >
> > > > > The only way I see this could be fixed is to move the freeze lock
> > > > > inside the pipe lock.  But that would mean splice/sendfile/etc could
> > > > > be frozen with the pipe lock held.  It doesn't look nice.
> > > > >
> > > > > Any other ideas?
> > > > >
> > > >
> > > > [CC Jan]
> > > >
> > > > I think we are allowed to file_start_write_trylock(upper)
> > > > before ovl_inode_lock(). This in ONLY needed to cover the corner
> > > > case of upper being frozen in between "upper freeze lock -> pipe lock"
> > > > and thread B being in between "ovl inode lock -> upper freeze lock".
> > > > Is it OK to return a transient error in this corner copy up case?
> > >
> > > This case shouldn't happen assuming adherence to the "upper shall not
> > > be modified while part of the overlay" rule.
> > >
> >
> > Right. And unfreezing upper should untangle this deadlock,
> > because both thread A and B are taking a shared sb_writers lock.
>
> I don't think that'll work.   The deadlock would involve freezing for
> sure, otherwise sb_start_write() won't block.  But there's no way to
> cancel sb_wait_write() once it's called, so the deadlock is permanent.
>

Of course.

> > > Side note: I don't see that it has anything to do with copy-up, but I
> > > may be missing something.
> > >
> >
> > You are right. I was confusing your "ovl inode lock" with ovl_inode_lock(),
> > but the latter is taken after upper freeze lock, so irrelevant.
> >
> > > My other thought is that perhaps sb_start_write() should invoke
> > > s_ops->start_write() so that overlay can do the freeze protection on
> > > the upper early.
> > >
> >
> > Sorry, I don't see how that solves anything expect for the lockdep
> > warning. In both cases threads A and B would block until upper
> > in unfrozen, only without a lockdep warning.
> > Also, I am quite sure that taking upper freeze lock early will generate
> > many new lockdep warnings.
>
> My thinking was to make the lock order:
>
>   ovl freeze lock -> upper freeze lock -> ovl inode lock -> upper inode lock
>

That sounds reasonable for the common cases,
but it limits us in doing things like updating the origin/impure xattrs on
lookup and removing the impure xattr on readdir.
I guess these best-effort cases could use an upper freeze trylock?
I sense there may be other issues lurking with the page cache implementation...

>
> > Anyway, what about the recursive do_splice_direct() issue
> > with lazy copy up, do you have an easy solution for that?
>
> Not sure.  I think splice_direct_to_actor() should be able to deal
> with it.  E.g.
>
>     pipe = current->splice_pipe;
>     if (unlikely(!pipe)) {
>         pipe = alloc_pipe_info();
>         ...
>     } else {
>         current->splice_pipe = NULL;
>     }
>
>     ... do the actual splicing ...
>
>     if (!current->splice_pipe) {
>         current->splice_pipe = pipe
>     } else {
>         free_pipe_info(pipe);
>     }
>

OK, I wonder how to make lockdep aware of that nesting.
Limit the nesting level to FILESYSTEM_MAX_STACK_DEPTH?...
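
For example something like this, keying the pipe mutex subclass on the
stacking depth (very rough, the depth plumbing is hand-waved, just to show
the kind of lockdep annotation meant):

    static void pipe_lock_stacked(struct pipe_inode_info *pipe, int depth)
    {
        /* one lockdep subclass per filesystem stacking level */
        BUILD_BUG_ON(FILESYSTEM_MAX_STACK_DEPTH >= MAX_LOCKDEP_SUBCLASSES);
        mutex_lock_nested(&pipe->mutex, depth);
    }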

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-11 14:33         ` Miklos Szeredi
  2019-02-11 15:06           ` Amir Goldstein
@ 2019-02-12 11:14           ` Jan Kara
  2019-02-12 13:39             ` Amir Goldstein
  1 sibling, 1 reply; 13+ messages in thread
From: Jan Kara @ 2019-02-12 11:14 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Amir Goldstein, Jan Kara, linux-fsdevel, linux-kernel,
	syzkaller-bugs, Al Viro, syzbot, overlayfs

On Mon 11-02-19 15:33:38, Miklos Szeredi wrote:
> On Mon, Feb 11, 2019 at 2:08 PM Amir Goldstein <amir73il@gmail.com> wrote:
> >
> > On Mon, Feb 11, 2019 at 2:37 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > >
> > > On Mon, Feb 11, 2019 at 1:06 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > > >
> > > > On Mon, Feb 11, 2019 at 8:38 AM Amir Goldstein <amir73il@gmail.com> wrote:
> > > > >
> > > > > On Sun, Feb 10, 2019 at 8:23 PM syzbot
> > > > > <syzbot+31d8b84465a7cbfd8515@syzkaller.appspotmail.com> wrote:
> > > >
> > > > > > -> #1 (&ovl_i_mutex_key[depth]){+.+.}:
> > > > > >         down_write+0x38/0x90 kernel/locking/rwsem.c:70
> > > > > >         inode_lock include/linux/fs.h:757 [inline]
> > > > > >         ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:231
> > > > > >         call_write_iter include/linux/fs.h:1863 [inline]
> > > > > >         new_sync_write fs/read_write.c:474 [inline]
> > > > > >         __vfs_write+0x613/0x8e0 fs/read_write.c:487
> > > > > > kobject: 'loop4' (000000009e2b886d): kobject_uevent_env
> > > > > >         __kernel_write+0x110/0x3b0 fs/read_write.c:506
> > > > > >         write_pipe_buf+0x15d/0x1f0 fs/splice.c:797
> > > > > >         splice_from_pipe_feed fs/splice.c:503 [inline]
> > > > > >         __splice_from_pipe+0x39a/0x7e0 fs/splice.c:627
> > > > > >         splice_from_pipe+0x108/0x170 fs/splice.c:662
> > > > > >         default_file_splice_write+0x3c/0x90 fs/splice.c:809
> > > >
> > > > Irrelevant to the lockdep splat, but why isn't there an
> > > > ovl_splice_write() that just recurses into realfile->splice_write()?
> > > > Sounds like a much more efficient way to handle splice read and
> > > > write...
> > > >
> > > > [...]
> > > >
> > > > > Miklos,
> > > > >
> > > > > Its good that this report popped up again, because I went to
> > > > > look back at my notes from previous report [1].
> > > > > If I was right in my previous analysis then we must have a real
> > > > > deadlock in current "lazy copy up" WIP patches. Right?
> > > >
> > > > Hmm, AFAICS this circular dependency translated into layman's terms:
> > > >
> > > > pipe lock -> ovl inode lock  (splice to ovl file)
> > > >
> > > > ovl inode lock -> upper freeze lock (truncate of ovl file)
> > > >
> > > > upper freeze lock -> pipe lock (splice to upper file)
> > >
> > > So what can we do with this?
> > >
> > > The "freeze lock -> inode lock" dependency is fixed.   This is
> > > reversed in overlay to "ovl inode lock -> upper freeze lock", which is
> > > okay, because this is a nesting that cannot be reversed.   But in
> > > splice the pipe locks comes in between: "freeze lock -> pipe lock ->
> > > inode lock" which breaks this nesting direction and creates a true
> > > reverse dependency between ovl inode lock and upper freeze lock.
> > >
> > > The only way I see this could be fixed is to move the freeze lock
> > > inside the pipe lock.  But that would mean splice/sendfile/etc could
> > > be frozen with the pipe lock held.  It doesn't look nice.
> > >
> > > Any other ideas?
> > >
> >
> > [CC Jan]
> >
> > I think we are allowed to file_start_write_trylock(upper)
> > before ovl_inode_lock(). This in ONLY needed to cover the corner
> > case of upper being frozen in between "upper freeze lock -> pipe lock"
> > and thread B being in between "ovl inode lock -> upper freeze lock".
> > Is it OK to return a transient error in this corner copy up case?
> 
> This case shouldn't happen assuming adherence to the "upper shall not
> be modified while part of the overlay" rule.
> 
> Side note: I don't see that it has anything to do with copy-up, but I
> may be missing something.
> 
> My other thought is that perhaps sb_start_write() should invoke
> s_ops->start_write() so that overlay can do the freeze protection on
> the upper early.

So my understanding of overlayfs is pretty basic so I'm sorry if I miss
something. If I'm right, we have three superblocks here: ovl, upper, lower.
Now 'lower' is read-only so for freezing purposes we can just forget about
it. 'upper' is where the real changes are going into and 'ovl' is a wrapper
virtual superblock that handles merging of 'lower' and 'upper'. Correct so
far?

And the problem seems to be that when you acquire freeze protection for the
'ovl' superblock, you in fact want to acquire freeze protection for the
'upper' (as 'ovl' is just virtual and has no disk state to protect). So I
agree that a callback to allow overlayfs to acquire freeze protection on
'upper' right away would be one solution. Or we could make s_writers a
pointer and redirect ovl->s_writers to upper->s_writers. Then VFS should do
the right thing from the start unless overlayfs calls back into operations
on 'upper' that will try to acquire the freeze protection again. Thoughts?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-12 11:14           ` Jan Kara
@ 2019-02-12 13:39             ` Amir Goldstein
  2019-02-12 14:19               ` Miklos Szeredi
  2019-02-12 16:11               ` Jan Kara
  0 siblings, 2 replies; 13+ messages in thread
From: Amir Goldstein @ 2019-02-12 13:39 UTC (permalink / raw)
  To: Jan Kara
  Cc: Miklos Szeredi, linux-fsdevel, linux-kernel, syzkaller-bugs,
	Al Viro, syzbot, overlayfs

> > My other thought is that perhaps sb_start_write() should invoke
> > s_ops->start_write() so that overlay can do the freeze protection on
> > the upper early.
>
> So my understanding of overlayfs is pretty basic so I'm sorry if I miss
> something. If I'm right, we have three superblocks here: ovl, upper, lower.
> Now 'lower' is read-only so for freezing purposes we can just forget about
> it. 'upper' is where the real changes are going into and 'ovl' is a wrapper
> virtual superblock that handles merging of 'lower' and 'upper'. Correct so
> far?

Yes.

>
> And the problem seems to be that when you acquire freeze protection for the
> 'ovl' superblock, you in fact want to acquire freeze protection for the
> 'upper' (as 'ovl' is just virtual and has no disk state to protect). So I

There are use cases for freezing ovl (i.e. ovl snapshots), but that is not
implemented at the moment.

Overlayfs already gets upper freeze protection internally before any
modification to upper.
The problem is that the locking order of upper freeze is currently under the
overlay inode mutex, and that brings a problem with the above pipe case.

> agree that a callback to allow overlayfs to acquire freeze protection on
> 'upper' right away would be one solution. Or we could make s_writers a
> pointer and redirect ovl->s_writers to upper->s_writers. Then VFS should do
> the right thing from the start unless overlayfs calls back into operations
> on 'upper' that will try to acquire the freeze protection again. Thoughts?

Overlayfs definitely calls into operations on upper and upper certainly
acquires several levels of s_writers itself.

The problem with the proposal to change the locking order to
ovl freeze -> upper freeze -> ovl inode -> upper inode
is that for some non-write operations (e.g. lookup, readdir)
overlay may end up updating xattrs on upper, so it will need
to take upper freeze after the ovl inode lock without ovl freeze
being called by the vfs.

I suggested that we may use an upper freeze trylock in those
cases and skip the xattr update if the trylock fails.

Not sure if my assumption is correct that this would be OK
w.r.t. locking rules?
Not sure if we can get away with trylock in all the cases where
we need to modify upper.
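
Roughly like this for the readdir case (sketch only; the return value is
ignored and real code would also need mnt_want_write() on the upper mount):

    static void ovl_remove_impure_best_effort(struct dentry *upperdentry)
    {
        struct super_block *upper_sb = upperdentry->d_sb;

        /* best effort: skip the xattr cleanup if upper is frozen */
        if (!sb_start_write_trylock(upper_sb))
            return;

        ovl_do_removexattr(upperdentry, OVL_XATTR_IMPURE);
        sb_end_write(upper_sb);
    }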

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-12 13:39             ` Amir Goldstein
@ 2019-02-12 14:19               ` Miklos Szeredi
  2019-02-12 16:11               ` Jan Kara
  1 sibling, 0 replies; 13+ messages in thread
From: Miklos Szeredi @ 2019-02-12 14:19 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Jan Kara, linux-fsdevel, linux-kernel, syzkaller-bugs, Al Viro,
	syzbot, overlayfs

On Tue, Feb 12, 2019 at 2:39 PM Amir Goldstein <amir73il@gmail.com> wrote:
>
> > > My other thought is that perhaps sb_start_write() should invoke
> > > s_ops->start_write() so that overlay can do the freeze protection on
> > > the upper early.
> >
> > So my understanding of overlayfs is pretty basic so I'm sorry if I miss
> > something. If I'm right, we have three superblocks here: ovl, upper, lower.
> > Now 'lower' is read-only so for freezing purposes we can just forget about
> > it. 'upper' is where the real changes are going into and 'ovl' is a wrapper
> > virtual superblock that handles merging of 'lower' and 'upper'. Correct so
> > far?
>
> Yes.
>
> >
> > And the problem seems to be that when you acquire freeze protection for the
> > 'ovl' superblock, you in fact want to acquire freeze protection for the
> > 'upper' (as 'ovl' is just virtual and has no disk state to protect). So I
>
> There are use case for freezing ovl (i.e. ovl snapshots) but it is not
> implemented
> at the moment.
>
> Overlayfs already gets upper freeze protection internally before any
> modification
> to upper.
> The problem that locking order of upper freeze is currently under overlay
> inode mutex. And that brings a problem with the above pipe case.
>
> > agree that a callback to allow overlayfs to acquire freeze protection on
> > 'upper' right away would be one solution. Or we could make s_writers a
> > pointer and redirect ovl->s_writers to upper->s_writers. Then VFS should do
> > the right thing from the start unless overlayfs calls back into operations
> > on 'upper' that will try to acquire the freeze protection again. Thoughts?
>
> Overlayfs definitely calls into operations on upper and upper certainly
> acquires several levels of s_writers itself.

sb_start_write() calls cannot be nested (for the same reason two
shared locks may be part of a deadlock), so this would mean having to
make sure that none of the code that overlay calls does
sb_start_write() itself.  Which means quite a bit of vfs api churn and
the associated pain...

> The problem with the proposal to change locking order to
> ovl freeze -> upper freeze -> ovl inode -> upper inode
> is that for some non-write operations (e.g. lookup, readdir)
> overlay may end up updating xattrs on upper, so will need
> to take upper freeze after ovl inode lock without ovl freeze
> being called by vfs.
>
> I suggested that we may use upper freeze trylock in those
> cases and skip xattr update if trylock fails.
>
> Not sure if my assumption is correct that this would be ok
> w.r.t locking rules?

Yes, using trylock is always deadlock free.

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: possible deadlock in pipe_lock (2)
  2019-02-12 13:39             ` Amir Goldstein
  2019-02-12 14:19               ` Miklos Szeredi
@ 2019-02-12 16:11               ` Jan Kara
  1 sibling, 0 replies; 13+ messages in thread
From: Jan Kara @ 2019-02-12 16:11 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Jan Kara, Miklos Szeredi, linux-fsdevel, linux-kernel,
	syzkaller-bugs, Al Viro, syzbot, overlayfs

On Tue 12-02-19 15:39:38, Amir Goldstein wrote:
> > > My other thought is that perhaps sb_start_write() should invoke
> > > s_ops->start_write() so that overlay can do the freeze protection on
> > > the upper early.
> >
> > So my understanding of overlayfs is pretty basic so I'm sorry if I miss
> > something. If I'm right, we have three superblocks here: ovl, upper, lower.
> > Now 'lower' is read-only so for freezing purposes we can just forget about
> > it. 'upper' is where the real changes are going into and 'ovl' is a wrapper
> > virtual superblock that handles merging of 'lower' and 'upper'. Correct so
> > far?
> 
> Yes.
> 
> >
> > And the problem seems to be that when you acquire freeze protection for the
> > 'ovl' superblock, you in fact want to acquire freeze protection for the
> > 'upper' (as 'ovl' is just virtual and has no disk state to protect). So I
> 
> There are use case for freezing ovl (i.e. ovl snapshots) but it is not
> implemented
> at the moment.
> 
> Overlayfs already gets upper freeze protection internally before any
> modification
> to upper.
> The problem that locking order of upper freeze is currently under overlay
> inode mutex. And that brings a problem with the above pipe case.
> 
> > agree that a callback to allow overlayfs to acquire freeze protection on
> > 'upper' right away would be one solution. Or we could make s_writers a
> > pointer and redirect ovl->s_writers to upper->s_writers. Then VFS should do
> > the right thing from the start unless overlayfs calls back into operations
> > on 'upper' that will try to acquire the freeze protection again. Thoughts?
> 
> Overlayfs definitely calls into operations on upper and upper certainly
> acquires several levels of s_writers itself.
> 
> The problem with the proposal to change locking order to
> ovl freeze -> upper freeze -> ovl inode -> upper inode
> is that for some non-write operations (e.g. lookup, readdir)
> overlay may end up updating xattrs on upper, so will need
> to take upper freeze after ovl inode lock without ovl freeze
> being called by vfs.
> 
> I suggested that we may use upper freeze trylock in those
> cases and skip xattr update if trylock fails.

Yes, that's what the VFS does as well, e.g. for atime updates. In fact I don't
see any other sensible possibility, since blocking a read operation on a frozen
filesystem would be surprising to the user.

> Not sure if my assumption is correct that this would be ok
> w.r.t locking rules?

It should be fine AFAICT.

> Not sure if we can get away with trylock in all the cases that
> we need to modify upper.

I don't know overlayfs enough to be able to tell :).

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2019-02-12 16:11 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-10 18:23 possible deadlock in pipe_lock (2) syzbot
2019-02-11  7:38 ` Amir Goldstein
2019-02-11 12:06   ` Miklos Szeredi
2019-02-11 12:37     ` Miklos Szeredi
2019-02-11 13:08       ` Amir Goldstein
2019-02-11 14:33         ` Miklos Szeredi
2019-02-11 15:06           ` Amir Goldstein
2019-02-11 15:40             ` Miklos Szeredi
2019-02-11 17:04               ` Amir Goldstein
2019-02-12 11:14           ` Jan Kara
2019-02-12 13:39             ` Amir Goldstein
2019-02-12 14:19               ` Miklos Szeredi
2019-02-12 16:11               ` Jan Kara
