linux-fsdevel.vger.kernel.org archive mirror
* Re: possible deadlock in ovl_write_iter
       [not found] ` <CAOQ4uxik7QbQVBQbaPWCxPG90HuJK=T0yckoyfe5NjLHhg898Q@mail.gmail.com>
@ 2018-09-27  3:51   ` Amir Goldstein
  0 siblings, 0 replies; 7+ messages in thread
From: Amir Goldstein @ 2018-09-27  3:51 UTC (permalink / raw)
  To: syzbot+695726bc473f9c36a4b6, Miklos Szeredi
  Cc: linux-kernel, overlayfs, syzkaller-bugs, linux-fsdevel

[CC: fsdevel]

On Thu, Sep 27, 2018 at 6:48 AM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Wed, Sep 26, 2018 at 11:44 PM syzbot
> <syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com> wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit:    a38523185b40 Merge tag 'libnvdimm-fixes-4.19-rc6' of git://..
> > git tree:       upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=178f63fa400000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=dfb440e26f0a6f6f
> > dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
> > compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
> >
> > Unfortunately, I don't have any reproducer for this crash yet.
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com
> >
> > Process accounting resumed
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 4.19.0-rc5+ #32 Not tainted
> > ------------------------------------------------------
> > overlayfs: option "workdir=./file1\" is useless in a non-upper mount, ignore
> > syz-executor1/7265 is trying to acquire lock:
> > 00000000fec87ddb (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock
> > include/linux/fs.h:738 [inline]
> > 00000000fec87ddb (&ovl_i_mutex_key[depth]){+.+.}, at:
> > ovl_write_iter+0x151/0xb00 fs/overlayfs/file.c:231
> >
> > but task is already holding lock:
> > 00000000998db2f0 (&acct->lock#2){+.+.}, at: acct_get kernel/acct.c:161
> > [inline]
> > 00000000998db2f0 (&acct->lock#2){+.+.}, at: slow_acct_process
> > kernel/acct.c:577 [inline]
> > 00000000998db2f0 (&acct->lock#2){+.+.}, at: acct_process+0x48b/0x875
> > kernel/acct.c:605
> >
> > which lock already depends on the new lock.
> >
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #2 (&acct->lock#2){+.+.}:
> > overlayfs: at least 2 lowerdir are needed while upperdir nonexistent
> >         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
> >         __mutex_lock+0x166/0x1700 kernel/locking/mutex.c:1072
> > kobject: 'kvm' (00000000bb4f2ec2): kobject_uevent_env
> >         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
> >         acct_pin_kill+0x26/0x100 kernel/acct.c:173
> >         pin_kill+0x29d/0xab0 fs/fs_pin.c:50
> > kobject: 'kvm' (00000000bb4f2ec2): fill_kobj_path: path
> > = '/devices/virtual/misc/kvm'
> >         acct_on+0x64a/0x8c0 kernel/acct.c:254
> >         __do_sys_acct kernel/acct.c:286 [inline]
> >         __se_sys_acct kernel/acct.c:273 [inline]
> >         __x64_sys_acct+0xc2/0x1f0 kernel/acct.c:273
> >         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
> >         entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > -> #1 (sb_writers#3){.+.+}:
> >         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
> > [inline]
> >         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
> >         __sb_start_write+0x214/0x370 fs/super.c:1387
> >         sb_start_write include/linux/fs.h:1566 [inline]
> >         mnt_want_write+0x3f/0xc0 fs/namespace.c:360
> >         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
> >         ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
> >         notify_change+0xbde/0x1110 fs/attr.c:334
> >         do_truncate+0x1bd/0x2d0 fs/open.c:63
> >         handle_truncate fs/namei.c:3008 [inline]
> >         do_last fs/namei.c:3424 [inline]
> >         path_openat+0x3762/0x5160 fs/namei.c:3534
> >         do_filp_open+0x255/0x380 fs/namei.c:3564
> > kobject: 'loop4' (0000000088f052bf): kobject_uevent_env
> >         do_sys_open+0x568/0x700 fs/open.c:1063
> >         ksys_open include/linux/syscalls.h:1276 [inline]
> >         __do_sys_creat fs/open.c:1121 [inline]
> >         __se_sys_creat fs/open.c:1119 [inline]
> >         __x64_sys_creat+0x61/0x80 fs/open.c:1119
> >         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
> > kobject: 'loop4' (0000000088f052bf): fill_kobj_path: path
> > = '/devices/virtual/block/loop4'
> >         entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > -> #0
> > kobject: 'kvm' (00000000bb4f2ec2): kobject_uevent_env
> >   (&ovl_i_mutex_key[depth]){+.+.}:
> >         lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3900
> >         down_write+0x8a/0x130 kernel/locking/rwsem.c:70
> >         inode_lock include/linux/fs.h:738 [inline]
> >         ovl_write_iter+0x151/0xb00 fs/overlayfs/file.c:231
> > kobject: 'kvm' (00000000bb4f2ec2): fill_kobj_path: path
> > = '/devices/virtual/misc/kvm'
> >         call_write_iter include/linux/fs.h:1808 [inline]
> >         new_sync_write fs/read_write.c:474 [inline]
> >         __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
> >         __kernel_write+0x10c/0x370 fs/read_write.c:506
> >         do_acct_process+0x1144/0x1660 kernel/acct.c:520
> >         slow_acct_process kernel/acct.c:579 [inline]
> >         acct_process+0x6b1/0x875 kernel/acct.c:605
> >         do_exit+0x1aaf/0x2610 kernel/exit.c:857
> >         do_group_exit+0x177/0x440 kernel/exit.c:970
> >         get_signal+0x8b0/0x1980 kernel/signal.c:2513
> >         do_signal+0x9c/0x21e0 arch/x86/kernel/signal.c:816
> >         exit_to_usermode_loop+0x2e5/0x380 arch/x86/entry/common.c:162
> >         prepare_exit_to_usermode arch/x86/entry/common.c:197 [inline]
> >         syscall_return_slowpath arch/x86/entry/common.c:268 [inline]
> >         do_syscall_64+0x6be/0x820 arch/x86/entry/common.c:293
> >         entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > other info that might help us debug this:
> >
> > Chain exists of:
> >    &ovl_i_mutex_key[depth] --> sb_writers#3 --> &acct->lock#2
> >
>
> I am not sure this is the root cause, but there seems to be a locking
> order violation in the code that replaces the accounting file.
> If the new file is on the same fs as the old file, acct_pin_kill(old)
> will skip writing the old file, because sb_writers is already held by
> acct_on().
> If the new file is not on the same fs as the old file, this ordering
> violation creates an unneeded dependency between new_sb_writers and
> old_sb_writers, which may later be reported as a possible deadlock.
>
> This could result in an actual deadlock if the accounting file is
> replaced from a file in overlayfs over "real fs" to a file in "real fs":
> acct_on() takes freeze protection on "real fs" and then tries to write
> to the overlayfs file. overlayfs is not freeze protected, so
> do_acct_process() can carry on with __kernel_write() to the overlayfs
> file, which SHOULD take "real fs" freeze protection and deadlock.
> Ironically (or not), it does not deadlock, because of a bug fixed by
> commit 898cc19d8af2 ("ovl: fix freeze protection bypass in
> ovl_write_iter()"), which is not upstream yet, so it was not in the
> kernel that syzbot tested.
>
> The following untested patch should fix the alleged deadlock.
>
> Miklos,
>
> If you feel confident about this patch, I can re-post it for you to add
> to -next. Otherwise, as I have not seen Al around to comment on the
> patch, I will try to reproduce the deadlock.
>
> Thanks,
> Amir.
> ---
>
> diff --git a/kernel/acct.c b/kernel/acct.c
> index addf7732fb56..c09355a7ae46 100644
> --- a/kernel/acct.c
> +++ b/kernel/acct.c
> @@ -251,8 +251,8 @@ static int acct_on(struct filename *pathname)
>         rcu_read_lock();
>         old = xchg(&ns->bacct, &acct->pin);
>         mutex_unlock(&acct->lock);
> -       pin_kill(old);
>         mnt_drop_write(mnt);
> +       pin_kill(old);
>         mntput(mnt);
>         return 0;
>  }
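
To make the inversion concrete: lockdep's report above is essentially a cycle in a directed graph where an edge A -> B means "B was acquired while A was held". The sketch below is a simplified userspace model in Python, not kernel code; `has_cycle` and the lock names are illustrative. It shows that the pre-patch ordering closes the cycle ovl_i_mutex -> sb_writers -> acct->lock -> ovl_i_mutex, while the patched ordering (pin_kill() after mnt_drop_write()) never creates the sb_writers -> acct->lock edge.

```python
# Toy model of a lock-dependency graph. Edge (a, b) means
# "lock b was acquired while lock a was held".

def has_cycle(edges):
    """Return True if the directed graph given by `edges` contains a cycle."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:           # back edge: a dependency cycle
                return True
            if c == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color.get(n, WHITE) == WHITE)

# Dependencies from the report, before the patch:
before = [
    ("ovl_i_mutex", "sb_writers"),  # ovl_setattr: mnt_want_write under inode lock
    ("sb_writers", "acct_lock"),    # acct_on: pin_kill(old) while holding sb_writers
    ("acct_lock", "ovl_i_mutex"),   # do_acct_process: write to overlay acct file
]
assert has_cycle(before)            # the circular dependency syzbot reported

# With the patch, pin_kill(old) runs after mnt_drop_write(mnt),
# so the sb_writers -> acct_lock edge is never recorded:
after = [
    ("ovl_i_mutex", "sb_writers"),
    ("acct_lock", "ovl_i_mutex"),
]
assert not has_cycle(after)
```

Lockdep's real machinery is more involved (it tracks lock classes, read/write state and IRQ contexts), but this cycle check is the essence of the "possible circular locking dependency" warning.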

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: possible deadlock in ovl_write_iter
       [not found] <00000000000074e10d0576cc48f1@google.com>
       [not found] ` <CAOQ4uxik7QbQVBQbaPWCxPG90HuJK=T0yckoyfe5NjLHhg898Q@mail.gmail.com>
@ 2018-11-27  7:06 ` syzbot
  2018-11-27  7:44   ` Amir Goldstein
  2018-11-27 15:07   ` Amir Goldstein
  1 sibling, 2 replies; 7+ messages in thread
From: syzbot @ 2018-11-27  7:06 UTC (permalink / raw)
  To: amir73il, linux-fsdevel, linux-kernel, linux-unionfs, miklos,
	syzkaller-bugs

syzbot has found a reproducer for the following crash on:

HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com

overlayfs: filesystem on './file0' not supported as upperdir

======================================================
WARNING: possible circular locking dependency detected
4.20.0-rc4+ #351 Not tainted
------------------------------------------------------
syz-executor338/5996 is trying to acquire lock:
00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock  
include/linux/fs.h:757 [inline]
00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at:  
ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231

but task is already holding lock:
00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62  
[inline]
00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80  
fs/pipe.c:70

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&pipe->mutex/1){+.+.}:
        __mutex_lock_common kernel/locking/mutex.c:925 [inline]
        __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
        pipe_lock_nested fs/pipe.c:62 [inline]
        pipe_lock+0x6e/0x80 fs/pipe.c:70
        iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
        do_splice_from fs/splice.c:851 [inline]
        do_splice+0x64a/0x1430 fs/splice.c:1147
        __do_sys_splice fs/splice.c:1414 [inline]
        __se_sys_splice fs/splice.c:1394 [inline]
        __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (sb_writers#3){.+.+}:
        percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36  
[inline]
        percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
        __sb_start_write+0x214/0x370 fs/super.c:1387
        sb_start_write include/linux/fs.h:1597 [inline]
        mnt_want_write+0x3f/0xc0 fs/namespace.c:360
        ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
        ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
        notify_change+0xbde/0x1110 fs/attr.c:334
        do_truncate+0x1bd/0x2d0 fs/open.c:63
        handle_truncate fs/namei.c:3008 [inline]
        do_last fs/namei.c:3424 [inline]
        path_openat+0x375f/0x5150 fs/namei.c:3534
        do_filp_open+0x255/0x380 fs/namei.c:3564
        do_sys_open+0x568/0x700 fs/open.c:1063
        __do_sys_openat fs/open.c:1090 [inline]
        __se_sys_openat fs/open.c:1084 [inline]
        __x64_sys_openat+0x9d/0x100 fs/open.c:1084
        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&ovl_i_mutex_key[depth]){+.+.}:
        lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
        down_write+0x8a/0x130 kernel/locking/rwsem.c:70
        inode_lock include/linux/fs.h:757 [inline]
        ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
        call_write_iter include/linux/fs.h:1857 [inline]
        new_sync_write fs/read_write.c:474 [inline]
        __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
        __kernel_write+0x10c/0x370 fs/read_write.c:506
        write_pipe_buf+0x180/0x240 fs/splice.c:797
        splice_from_pipe_feed fs/splice.c:503 [inline]
        __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
        splice_from_pipe+0x1ec/0x340 fs/splice.c:662
        default_file_splice_write+0x3c/0x90 fs/splice.c:809
        do_splice_from fs/splice.c:851 [inline]
        do_splice+0x64a/0x1430 fs/splice.c:1147
        __do_sys_splice fs/splice.c:1414 [inline]
        __se_sys_splice fs/splice.c:1394 [inline]
        __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
        entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
   &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&pipe->mutex/1);
                                lock(sb_writers#3);
                                lock(&pipe->mutex/1);
   lock(&ovl_i_mutex_key[depth]);

  *** DEADLOCK ***

2 locks held by syz-executor338/5996:
  #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write  
include/linux/fs.h:2810 [inline]
  #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430  
fs/splice.c:1146
  #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested  
fs/pipe.c:62 [inline]
  #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80  
fs/pipe.c:70

stack backtrace:
CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS  
Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:77 [inline]
  dump_stack+0x244/0x39d lib/dump_stack.c:113
  print_circular_bug.isra.35.cold.54+0x1bd/0x27d  
kernel/locking/lockdep.c:1221
  check_prev_add kernel/locking/lockdep.c:1863 [inline]
  check_prevs_add kernel/locking/lockdep.c:1976 [inline]
  validate_chain kernel/locking/lockdep.c:2347 [inline]
  __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
  lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
  down_write+0x8a/0x130 kernel/locking/rwsem.c:70
  inode_lock include/linux/fs.h:757 [inline]
  ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
  call_write_iter include/linux/fs.h:1857 [inline]
  new_sync_write fs/read_write.c:474 [inline]
  __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
  __kernel_write+0x10c/0x370 fs/read_write.c:506
  write_pipe_buf+0x180/0x240 fs/splice.c:797
  splice_from_pipe_feed fs/splice.c:503 [inline]
  __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
  splice_from_pipe+0x1ec/0x340 fs/splice.c:662
  default_file_splice_write+0x3c/0x90 fs/splice.c:809
  do_splice_from fs/splice.c:851 [inline]
  do_splice+0x64a/0x1430 fs/splice.c:1147
  __do_sys_splice fs/splice.c:1414 [inline]
  __se_sys_splice fs/splice.c:1394 [inline]
  __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
  do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x445ad9
Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7  
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff  
ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c


* Re: possible deadlock in ovl_write_iter
  2018-11-27  7:06 ` syzbot
@ 2018-11-27  7:44   ` Amir Goldstein
  2018-11-27 14:12     ` Dmitry Vyukov
  2018-11-27 15:07   ` Amir Goldstein
  1 sibling, 1 reply; 7+ messages in thread
From: Amir Goldstein @ 2018-11-27  7:44 UTC (permalink / raw)
  To: syzbot+695726bc473f9c36a4b6, Dmitry Vyukov
  Cc: linux-fsdevel, linux-kernel, overlayfs, Miklos Szeredi,
	syzkaller-bugs, Al Viro

On Tue, Nov 27, 2018 at 9:06 AM syzbot
<syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com> wrote:
>
> syzbot has found a reproducer for the following crash on:
>
> HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
> dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com
>
> overlayfs: filesystem on './file0' not supported as upperdir
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 4.20.0-rc4+ #351 Not tainted
> ------------------------------------------------------
> syz-executor338/5996 is trying to acquire lock:
> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock
> include/linux/fs.h:757 [inline]
> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at:
> ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>
> but task is already holding lock:
> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62
> [inline]
> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
> fs/pipe.c:70
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (&pipe->mutex/1){+.+.}:
>         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
>         __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
>         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
>         pipe_lock_nested fs/pipe.c:62 [inline]
>         pipe_lock+0x6e/0x80 fs/pipe.c:70
>         iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x64a/0x1430 fs/splice.c:1147
>         __do_sys_splice fs/splice.c:1414 [inline]
>         __se_sys_splice fs/splice.c:1394 [inline]
>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #1 (sb_writers#3){.+.+}:
>         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
> [inline]
>         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
>         __sb_start_write+0x214/0x370 fs/super.c:1387
>         sb_start_write include/linux/fs.h:1597 [inline]
>         mnt_want_write+0x3f/0xc0 fs/namespace.c:360
>         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
>         ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
>         notify_change+0xbde/0x1110 fs/attr.c:334
>         do_truncate+0x1bd/0x2d0 fs/open.c:63
>         handle_truncate fs/namei.c:3008 [inline]
>         do_last fs/namei.c:3424 [inline]
>         path_openat+0x375f/0x5150 fs/namei.c:3534
>         do_filp_open+0x255/0x380 fs/namei.c:3564
>         do_sys_open+0x568/0x700 fs/open.c:1063
>         __do_sys_openat fs/open.c:1090 [inline]
>         __se_sys_openat fs/open.c:1084 [inline]
>         __x64_sys_openat+0x9d/0x100 fs/open.c:1084
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #0 (&ovl_i_mutex_key[depth]){+.+.}:
>         lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>         down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>         inode_lock include/linux/fs.h:757 [inline]
>         ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>         call_write_iter include/linux/fs.h:1857 [inline]
>         new_sync_write fs/read_write.c:474 [inline]
>         __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>         __kernel_write+0x10c/0x370 fs/read_write.c:506
>         write_pipe_buf+0x180/0x240 fs/splice.c:797
>         splice_from_pipe_feed fs/splice.c:503 [inline]
>         __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>         splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x64a/0x1430 fs/splice.c:1147
>         __do_sys_splice fs/splice.c:1414 [inline]
>         __se_sys_splice fs/splice.c:1394 [inline]
>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> other info that might help us debug this:
>
> Chain exists of:
>    &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1
>
>   Possible unsafe locking scenario:
>
>         CPU0                    CPU1
>         ----                    ----
>    lock(&pipe->mutex/1);
>                                 lock(sb_writers#3);
>                                 lock(&pipe->mutex/1);
>    lock(&ovl_i_mutex_key[depth]);
>
>   *** DEADLOCK ***
>
> 2 locks held by syz-executor338/5996:
>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write
> include/linux/fs.h:2810 [inline]
>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430
> fs/splice.c:1146
>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested
> fs/pipe.c:62 [inline]
>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
> fs/pipe.c:70
>
> stack backtrace:
> CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>   __dump_stack lib/dump_stack.c:77 [inline]
>   dump_stack+0x244/0x39d lib/dump_stack.c:113
>   print_circular_bug.isra.35.cold.54+0x1bd/0x27d
> kernel/locking/lockdep.c:1221
>   check_prev_add kernel/locking/lockdep.c:1863 [inline]
>   check_prevs_add kernel/locking/lockdep.c:1976 [inline]
>   validate_chain kernel/locking/lockdep.c:2347 [inline]
>   __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
>   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>   down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>   inode_lock include/linux/fs.h:757 [inline]
>   ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>   call_write_iter include/linux/fs.h:1857 [inline]
>   new_sync_write fs/read_write.c:474 [inline]
>   __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>   __kernel_write+0x10c/0x370 fs/read_write.c:506
>   write_pipe_buf+0x180/0x240 fs/splice.c:797
>   splice_from_pipe_feed fs/splice.c:503 [inline]
>   __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>   splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>   default_file_splice_write+0x3c/0x90 fs/splice.c:809
>   do_splice_from fs/splice.c:851 [inline]
>   do_splice+0x64a/0x1430 fs/splice.c:1147
>   __do_sys_splice fs/splice.c:1414 [inline]
>   __se_sys_splice fs/splice.c:1394 [inline]
>   __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>   entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x445ad9
> Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7
> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
> ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
> RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
> RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
> RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
> RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
> R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
> R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c
>

Dmitry,

FYI, this does not have the same root cause as this report under the
same syzbot id:
https://syzkaller.appspot.com/text?tag=CrashReport&x=16a074a1400000

The former report (which syzbot has no repro for) is fixed by:
"kernel/acct.c: fix locking order when switching acct files"
It is available on: https://github.com/amir73il/linux ovl-fixes
but Miklos has not gotten around to properly reviewing the patch yet,
and neither has Al...

The commit message of my fix also includes a very simple reproducer.
Perhaps you want to split those two reports into two different syzbot
report ids?

Can you do that? If yes, let me know, so I will change the Reported-by
tag on my fix.

Thanks,
Amir.


* Re: possible deadlock in ovl_write_iter
  2018-11-27  7:44   ` Amir Goldstein
@ 2018-11-27 14:12     ` Dmitry Vyukov
  0 siblings, 0 replies; 7+ messages in thread
From: Dmitry Vyukov @ 2018-11-27 14:12 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: syzbot+695726bc473f9c36a4b6, linux-fsdevel, linux-kernel,
	overlayfs, Miklos Szeredi, syzkaller-bugs, Al Viro

On Tue, Nov 27, 2018 at 8:44 AM, Amir Goldstein <amir73il@gmail.com> wrote:
> On Tue, Nov 27, 2018 at 9:06 AM syzbot
> <syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com> wrote:
>>
>> syzbot has found a reproducer for the following crash on:
>>
>> HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
>> git tree:       upstream
>> console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
>> kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
>> dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
>> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
>> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000
>>
>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>> Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com
>>
>> overlayfs: filesystem on './file0' not supported as upperdir
>>
>> ======================================================
>> WARNING: possible circular locking dependency detected
>> 4.20.0-rc4+ #351 Not tainted
>> ------------------------------------------------------
>> syz-executor338/5996 is trying to acquire lock:
>> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock
>> include/linux/fs.h:757 [inline]
>> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at:
>> ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>
>> but task is already holding lock:
>> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62
>> [inline]
>> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
>> fs/pipe.c:70
>>
>> which lock already depends on the new lock.
>>
>>
>> the existing dependency chain (in reverse order) is:
>>
>> -> #2 (&pipe->mutex/1){+.+.}:
>>         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
>>         __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
>>         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
>>         pipe_lock_nested fs/pipe.c:62 [inline]
>>         pipe_lock+0x6e/0x80 fs/pipe.c:70
>>         iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
>>         do_splice_from fs/splice.c:851 [inline]
>>         do_splice+0x64a/0x1430 fs/splice.c:1147
>>         __do_sys_splice fs/splice.c:1414 [inline]
>>         __se_sys_splice fs/splice.c:1394 [inline]
>>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> -> #1 (sb_writers#3){.+.+}:
>>         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
>> [inline]
>>         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
>>         __sb_start_write+0x214/0x370 fs/super.c:1387
>>         sb_start_write include/linux/fs.h:1597 [inline]
>>         mnt_want_write+0x3f/0xc0 fs/namespace.c:360
>>         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
>>         ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
>>         notify_change+0xbde/0x1110 fs/attr.c:334
>>         do_truncate+0x1bd/0x2d0 fs/open.c:63
>>         handle_truncate fs/namei.c:3008 [inline]
>>         do_last fs/namei.c:3424 [inline]
>>         path_openat+0x375f/0x5150 fs/namei.c:3534
>>         do_filp_open+0x255/0x380 fs/namei.c:3564
>>         do_sys_open+0x568/0x700 fs/open.c:1063
>>         __do_sys_openat fs/open.c:1090 [inline]
>>         __se_sys_openat fs/open.c:1084 [inline]
>>         __x64_sys_openat+0x9d/0x100 fs/open.c:1084
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> -> #0 (&ovl_i_mutex_key[depth]){+.+.}:
>>         lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>>         down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>>         inode_lock include/linux/fs.h:757 [inline]
>>         ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>         call_write_iter include/linux/fs.h:1857 [inline]
>>         new_sync_write fs/read_write.c:474 [inline]
>>         __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>>         __kernel_write+0x10c/0x370 fs/read_write.c:506
>>         write_pipe_buf+0x180/0x240 fs/splice.c:797
>>         splice_from_pipe_feed fs/splice.c:503 [inline]
>>         __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>>         splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>>         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>>         do_splice_from fs/splice.c:851 [inline]
>>         do_splice+0x64a/0x1430 fs/splice.c:1147
>>         __do_sys_splice fs/splice.c:1414 [inline]
>>         __se_sys_splice fs/splice.c:1394 [inline]
>>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> other info that might help us debug this:
>>
>> Chain exists of:
>>    &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1
>>
>>   Possible unsafe locking scenario:
>>
>>         CPU0                    CPU1
>>         ----                    ----
>>    lock(&pipe->mutex/1);
>>                                 lock(sb_writers#3);
>>                                 lock(&pipe->mutex/1);
>>    lock(&ovl_i_mutex_key[depth]);
>>
>>   *** DEADLOCK ***
>>
>> 2 locks held by syz-executor338/5996:
>>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write
>> include/linux/fs.h:2810 [inline]
>>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430
>> fs/splice.c:1146
>>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested
>> fs/pipe.c:62 [inline]
>>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
>> fs/pipe.c:70
>>
>> stack backtrace:
>> CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>> Google 01/01/2011
>> Call Trace:
>>   __dump_stack lib/dump_stack.c:77 [inline]
>>   dump_stack+0x244/0x39d lib/dump_stack.c:113
>>   print_circular_bug.isra.35.cold.54+0x1bd/0x27d
>> kernel/locking/lockdep.c:1221
>>   check_prev_add kernel/locking/lockdep.c:1863 [inline]
>>   check_prevs_add kernel/locking/lockdep.c:1976 [inline]
>>   validate_chain kernel/locking/lockdep.c:2347 [inline]
>>   __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
>>   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>>   down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>>   inode_lock include/linux/fs.h:757 [inline]
>>   ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>   call_write_iter include/linux/fs.h:1857 [inline]
>>   new_sync_write fs/read_write.c:474 [inline]
>>   __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>>   __kernel_write+0x10c/0x370 fs/read_write.c:506
>>   write_pipe_buf+0x180/0x240 fs/splice.c:797
>>   splice_from_pipe_feed fs/splice.c:503 [inline]
>>   __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>>   splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>>   default_file_splice_write+0x3c/0x90 fs/splice.c:809
>>   do_splice_from fs/splice.c:851 [inline]
>>   do_splice+0x64a/0x1430 fs/splice.c:1147
>>   __do_sys_splice fs/splice.c:1414 [inline]
>>   __se_sys_splice fs/splice.c:1394 [inline]
>>   __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>   entry_SYSCALL_64_after_hwframe+0x49/0xbe
>> RIP: 0033:0x445ad9
>> Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7
>> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
>> ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
>> RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
>> RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
>> RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
>> RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
>> R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
>> R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c
>>
>
> Dmitry,
>
> FYI, this is not the same root cause as this report with the same syzbot id:
> https://syzkaller.appspot.com/text?tag=CrashReport&x=16a074a1400000
>
> The former report (which syzbot has no repro for) is fixed by:
> "kernel/acct.c: fix locking order when switching acct files"
> It is available on: https://github.com/amir73il/linux ovl-fixes
> but Miklos did not get around to properly reviewing the patch yet
> and neither did Al...
>
> The commit message of my fix also includes a very simple reproducer.
> Perhaps you want to split those two reports into two different syzbot
> report ids?
>
> Can you do that?
> If yes, let me know, so I will change the Reported-by tag on my fix.


ETOOMANYBUGS :)

syzkaller extracts a crash identity from the crash message (what you see
in the email subject) and then uses it to sort crashes into bug buckets.
It's not always precise, and I think it can't be 100% precise. In the end
there can be two completely identical crash reports caused by different
root causes, and there can also be radically different crashes caused by
the same root cause.

Manually patching things up won't work: how would syzkaller sort future
crashes into buckets? Into the first one? The second one? Or a new third
one?
The logic for extracting bug identity can be improved:
https://github.com/google/syzkaller/blob/master/pkg/report/linux.go
but I don't know exactly how for this case. Deadlocks are a bit tricky
(well, like most crashes).

This does not tend to be too bad in practice: you hit an example of the
problem, but overall it does not happen too frequently. The overall plan
for dealing with this is fixing bugs, one by one. Once this one is marked
as fixed and closed by syzbot, it will report any other bugs hiding
behind it in new bug reports (maybe there are not even two, maybe there
are five, who knows).

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: possible deadlock in ovl_write_iter
  2018-11-27  7:06 ` syzbot
  2018-11-27  7:44   ` Amir Goldstein
@ 2018-11-27 15:07   ` Amir Goldstein
  2018-11-27 15:48     ` Dmitry Vyukov
  1 sibling, 1 reply; 7+ messages in thread
From: Amir Goldstein @ 2018-11-27 15:07 UTC (permalink / raw)
  To: syzbot+695726bc473f9c36a4b6, Miklos Szeredi
  Cc: linux-fsdevel, linux-kernel, overlayfs, syzkaller-bugs

On Tue, Nov 27, 2018 at 9:06 AM syzbot
<syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com> wrote:
>
> syzbot has found a reproducer for the following crash on:
>
> HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
> dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com
>
> overlayfs: filesystem on './file0' not supported as upperdir
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 4.20.0-rc4+ #351 Not tainted
> ------------------------------------------------------
> syz-executor338/5996 is trying to acquire lock:
> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock
> include/linux/fs.h:757 [inline]
> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at:
> ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>
> but task is already holding lock:
> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62
> [inline]
> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
> fs/pipe.c:70
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (&pipe->mutex/1){+.+.}:
>         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
>         __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
>         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
>         pipe_lock_nested fs/pipe.c:62 [inline]
>         pipe_lock+0x6e/0x80 fs/pipe.c:70
>         iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x64a/0x1430 fs/splice.c:1147
>         __do_sys_splice fs/splice.c:1414 [inline]
>         __se_sys_splice fs/splice.c:1394 [inline]
>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #1 (sb_writers#3){.+.+}:
>         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
> [inline]
>         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
>         __sb_start_write+0x214/0x370 fs/super.c:1387
>         sb_start_write include/linux/fs.h:1597 [inline]
>         mnt_want_write+0x3f/0xc0 fs/namespace.c:360
>         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
>         ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
>         notify_change+0xbde/0x1110 fs/attr.c:334
>         do_truncate+0x1bd/0x2d0 fs/open.c:63
>         handle_truncate fs/namei.c:3008 [inline]
>         do_last fs/namei.c:3424 [inline]
>         path_openat+0x375f/0x5150 fs/namei.c:3534
>         do_filp_open+0x255/0x380 fs/namei.c:3564
>         do_sys_open+0x568/0x700 fs/open.c:1063
>         __do_sys_openat fs/open.c:1090 [inline]
>         __se_sys_openat fs/open.c:1084 [inline]
>         __x64_sys_openat+0x9d/0x100 fs/open.c:1084
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #0 (&ovl_i_mutex_key[depth]){+.+.}:
>         lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>         down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>         inode_lock include/linux/fs.h:757 [inline]
>         ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>         call_write_iter include/linux/fs.h:1857 [inline]
>         new_sync_write fs/read_write.c:474 [inline]
>         __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>         __kernel_write+0x10c/0x370 fs/read_write.c:506
>         write_pipe_buf+0x180/0x240 fs/splice.c:797
>         splice_from_pipe_feed fs/splice.c:503 [inline]
>         __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>         splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>         do_splice_from fs/splice.c:851 [inline]
>         do_splice+0x64a/0x1430 fs/splice.c:1147
>         __do_sys_splice fs/splice.c:1414 [inline]
>         __se_sys_splice fs/splice.c:1394 [inline]
>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> other info that might help us debug this:
>
> Chain exists of:
>    &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1
>
>   Possible unsafe locking scenario:
>
>         CPU0                    CPU1
>         ----                    ----
>    lock(&pipe->mutex/1);
>                                 lock(sb_writers#3);
>                                 lock(&pipe->mutex/1);
>    lock(&ovl_i_mutex_key[depth]);
>
>   *** DEADLOCK ***
>
> 2 locks held by syz-executor338/5996:
>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write
> include/linux/fs.h:2810 [inline]
>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430
> fs/splice.c:1146
>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested
> fs/pipe.c:62 [inline]
>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
> fs/pipe.c:70
>
> stack backtrace:
> CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>   __dump_stack lib/dump_stack.c:77 [inline]
>   dump_stack+0x244/0x39d lib/dump_stack.c:113
>   print_circular_bug.isra.35.cold.54+0x1bd/0x27d
> kernel/locking/lockdep.c:1221
>   check_prev_add kernel/locking/lockdep.c:1863 [inline]
>   check_prevs_add kernel/locking/lockdep.c:1976 [inline]
>   validate_chain kernel/locking/lockdep.c:2347 [inline]
>   __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
>   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>   down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>   inode_lock include/linux/fs.h:757 [inline]
>   ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>   call_write_iter include/linux/fs.h:1857 [inline]
>   new_sync_write fs/read_write.c:474 [inline]
>   __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>   __kernel_write+0x10c/0x370 fs/read_write.c:506
>   write_pipe_buf+0x180/0x240 fs/splice.c:797
>   splice_from_pipe_feed fs/splice.c:503 [inline]
>   __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>   splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>   default_file_splice_write+0x3c/0x90 fs/splice.c:809
>   do_splice_from fs/splice.c:851 [inline]
>   do_splice+0x64a/0x1430 fs/splice.c:1147
>   __do_sys_splice fs/splice.c:1414 [inline]
>   __se_sys_splice fs/splice.c:1394 [inline]
>   __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>   entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x445ad9
> Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7
> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
> ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
> RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
> RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
> RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
> RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
> R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
> R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c
>

This looks like a false positive because lockdep is not aware of
s_stack_depth of the file (fs) associated with the pipe.

HOWEVER, because overlayfs calls do_splice_direct() on copy up,
it is important to make sure that higher layer do_splice_direct() cannot
recursively trigger copy up.

At this time, overlayfs does the copy up on open for write, so any
higher layer do_splice_direct() will already get an out file that has been
copied up, but with future plans for "lazy copy up on first write", we need
to be careful.

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: possible deadlock in ovl_write_iter
  2018-11-27 15:07   ` Amir Goldstein
@ 2018-11-27 15:48     ` Dmitry Vyukov
  2018-11-27 16:13       ` Amir Goldstein
  0 siblings, 1 reply; 7+ messages in thread
From: Dmitry Vyukov @ 2018-11-27 15:48 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: syzbot+695726bc473f9c36a4b6, Miklos Szeredi, linux-fsdevel,
	linux-kernel, overlayfs, syzkaller-bugs

On Tue, Nov 27, 2018 at 4:07 PM, Amir Goldstein <amir73il@gmail.com> wrote:
> On Tue, Nov 27, 2018 at 9:06 AM syzbot
> <syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com> wrote:
>>
>> syzbot has found a reproducer for the following crash on:
>>
>> HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
>> git tree:       upstream
>> console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
>> kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
>> dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
>> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
>> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000
>>
>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>> Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com
>>
>> overlayfs: filesystem on './file0' not supported as upperdir
>>
>> ======================================================
>> WARNING: possible circular locking dependency detected
>> 4.20.0-rc4+ #351 Not tainted
>> ------------------------------------------------------
>> syz-executor338/5996 is trying to acquire lock:
>> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock
>> include/linux/fs.h:757 [inline]
>> 00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at:
>> ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>
>> but task is already holding lock:
>> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62
>> [inline]
>> 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
>> fs/pipe.c:70
>>
>> which lock already depends on the new lock.
>>
>>
>> the existing dependency chain (in reverse order) is:
>>
>> -> #2 (&pipe->mutex/1){+.+.}:
>>         __mutex_lock_common kernel/locking/mutex.c:925 [inline]
>>         __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
>>         mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
>>         pipe_lock_nested fs/pipe.c:62 [inline]
>>         pipe_lock+0x6e/0x80 fs/pipe.c:70
>>         iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
>>         do_splice_from fs/splice.c:851 [inline]
>>         do_splice+0x64a/0x1430 fs/splice.c:1147
>>         __do_sys_splice fs/splice.c:1414 [inline]
>>         __se_sys_splice fs/splice.c:1394 [inline]
>>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> -> #1 (sb_writers#3){.+.+}:
>>         percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36
>> [inline]
>>         percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
>>         __sb_start_write+0x214/0x370 fs/super.c:1387
>>         sb_start_write include/linux/fs.h:1597 [inline]
>>         mnt_want_write+0x3f/0xc0 fs/namespace.c:360
>>         ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
>>         ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
>>         notify_change+0xbde/0x1110 fs/attr.c:334
>>         do_truncate+0x1bd/0x2d0 fs/open.c:63
>>         handle_truncate fs/namei.c:3008 [inline]
>>         do_last fs/namei.c:3424 [inline]
>>         path_openat+0x375f/0x5150 fs/namei.c:3534
>>         do_filp_open+0x255/0x380 fs/namei.c:3564
>>         do_sys_open+0x568/0x700 fs/open.c:1063
>>         __do_sys_openat fs/open.c:1090 [inline]
>>         __se_sys_openat fs/open.c:1084 [inline]
>>         __x64_sys_openat+0x9d/0x100 fs/open.c:1084
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> -> #0 (&ovl_i_mutex_key[depth]){+.+.}:
>>         lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>>         down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>>         inode_lock include/linux/fs.h:757 [inline]
>>         ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>         call_write_iter include/linux/fs.h:1857 [inline]
>>         new_sync_write fs/read_write.c:474 [inline]
>>         __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>>         __kernel_write+0x10c/0x370 fs/read_write.c:506
>>         write_pipe_buf+0x180/0x240 fs/splice.c:797
>>         splice_from_pipe_feed fs/splice.c:503 [inline]
>>         __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>>         splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>>         default_file_splice_write+0x3c/0x90 fs/splice.c:809
>>         do_splice_from fs/splice.c:851 [inline]
>>         do_splice+0x64a/0x1430 fs/splice.c:1147
>>         __do_sys_splice fs/splice.c:1414 [inline]
>>         __se_sys_splice fs/splice.c:1394 [inline]
>>         __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>         do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>         entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> other info that might help us debug this:
>>
>> Chain exists of:
>>    &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1
>>
>>   Possible unsafe locking scenario:
>>
>>         CPU0                    CPU1
>>         ----                    ----
>>    lock(&pipe->mutex/1);
>>                                 lock(sb_writers#3);
>>                                 lock(&pipe->mutex/1);
>>    lock(&ovl_i_mutex_key[depth]);
>>
>>   *** DEADLOCK ***
>>
>> 2 locks held by syz-executor338/5996:
>>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write
>> include/linux/fs.h:2810 [inline]
>>   #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430
>> fs/splice.c:1146
>>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested
>> fs/pipe.c:62 [inline]
>>   #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80
>> fs/pipe.c:70
>>
>> stack backtrace:
>> CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>> Google 01/01/2011
>> Call Trace:
>>   __dump_stack lib/dump_stack.c:77 [inline]
>>   dump_stack+0x244/0x39d lib/dump_stack.c:113
>>   print_circular_bug.isra.35.cold.54+0x1bd/0x27d
>> kernel/locking/lockdep.c:1221
>>   check_prev_add kernel/locking/lockdep.c:1863 [inline]
>>   check_prevs_add kernel/locking/lockdep.c:1976 [inline]
>>   validate_chain kernel/locking/lockdep.c:2347 [inline]
>>   __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
>>   lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
>>   down_write+0x8a/0x130 kernel/locking/rwsem.c:70
>>   inode_lock include/linux/fs.h:757 [inline]
>>   ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
>>   call_write_iter include/linux/fs.h:1857 [inline]
>>   new_sync_write fs/read_write.c:474 [inline]
>>   __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
>>   __kernel_write+0x10c/0x370 fs/read_write.c:506
>>   write_pipe_buf+0x180/0x240 fs/splice.c:797
>>   splice_from_pipe_feed fs/splice.c:503 [inline]
>>   __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
>>   splice_from_pipe+0x1ec/0x340 fs/splice.c:662
>>   default_file_splice_write+0x3c/0x90 fs/splice.c:809
>>   do_splice_from fs/splice.c:851 [inline]
>>   do_splice+0x64a/0x1430 fs/splice.c:1147
>>   __do_sys_splice fs/splice.c:1414 [inline]
>>   __se_sys_splice fs/splice.c:1394 [inline]
>>   __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
>>   do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>   entry_SYSCALL_64_after_hwframe+0x49/0xbe
>> RIP: 0033:0x445ad9
>> Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7
>> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
>> ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
>> RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
>> RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
>> RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
>> RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
>> R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
>> R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c
>>
>
> This looks like a false positive because lockdep is not aware of
> s_stack_depth of the file (fs) associated with the pipe.

There must be some annotation to tell lockdep about this.


>
> HOWEVER, because overlayfs calls do_splice_direct() on copy up,
> it is important to make sure that higher layer do_splice_direct() cannot
> recursively trigger copy up.
>
> At this time, overlayfs does the copy up on open for write, so any
> higher layer do_splice_direct() will already get an out file that has been
> copied up, but with future plans for "lazy copy up on first write", we need
> to be careful.
>
> Thanks,
> Amir.
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller-bugs+unsubscribe@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/syzkaller-bugs/CAOQ4uxgiERNAyX4PDHf1DQLBg56rBANWLe575TGkz_WpbxaoCg%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: possible deadlock in ovl_write_iter
  2018-11-27 15:48     ` Dmitry Vyukov
@ 2018-11-27 16:13       ` Amir Goldstein
  0 siblings, 0 replies; 7+ messages in thread
From: Amir Goldstein @ 2018-11-27 16:13 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: syzbot+695726bc473f9c36a4b6, Miklos Szeredi, linux-fsdevel,
	linux-kernel, overlayfs, syzkaller-bugs

> > This looks like a false positive because lockdep is not aware of
> > s_stack_depth of the file (fs) associated with the pipe.
>
> There must be some annotation to tell lockdep about this.
>

It's a long story that can be summed up as "not simple":
https://lkml.org/lkml/2017/9/21/63

Overlayfs annotates the inode mutex in a lockdep-friendly manner.
This is why you see:
 (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock

But this does not extend to other locks that may be associated
with filesystem or blockdev objects (e.g. pipe_lock()) and does
not cover the case of stacked blockdev (e.g. loop).

Thanks,
Amir.

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2018-11-28  3:11 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <00000000000074e10d0576cc48f1@google.com>
     [not found] ` <CAOQ4uxik7QbQVBQbaPWCxPG90HuJK=T0yckoyfe5NjLHhg898Q@mail.gmail.com>
2018-09-27  3:51   ` possible deadlock in ovl_write_iter Amir Goldstein
2018-11-27  7:06 ` syzbot
2018-11-27  7:44   ` Amir Goldstein
2018-11-27 14:12     ` Dmitry Vyukov
2018-11-27 15:07   ` Amir Goldstein
2018-11-27 15:48     ` Dmitry Vyukov
2018-11-27 16:13       ` Amir Goldstein

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).