From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 26 Nov 2018 23:06:02 -0800
In-Reply-To: <00000000000074e10d0576cc48f1@google.com>
Message-ID: <0000000000001ec857057ba01589@google.com>
Subject: Re: possible deadlock in ovl_write_iter
From: syzbot
To: amir73il@gmail.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-unionfs@vger.kernel.org, miklos@szeredi.hu, syzkaller-bugs@googlegroups.com

syzbot has found a reproducer for the following crash on:

HEAD commit:    6f8b52ba442c Merge tag 'hwmon-for-v4.20-rc5' of git://git...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=120f3905400000
kernel config:  https://syzkaller.appspot.com/x/.config?x=c94f9f0c0363db4b
dashboard link: https://syzkaller.appspot.com/bug?extid=695726bc473f9c36a4b6
compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10cad225400000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13813093400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+695726bc473f9c36a4b6@syzkaller.appspotmail.com

overlayfs: filesystem on './file0' not supported as upperdir

======================================================
WARNING: possible circular locking dependency detected
4.20.0-rc4+ #351 Not tainted
------------------------------------------------------
syz-executor338/5996 is trying to acquire lock:
00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: inode_lock include/linux/fs.h:757 [inline]
00000000b59bb66d (&ovl_i_mutex_key[depth]){+.+.}, at: ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231

but task is already holding lock:
00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80 fs/pipe.c:70

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #2 (&pipe->mutex/1){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:925 [inline]
       __mutex_lock+0x166/0x16f0 kernel/locking/mutex.c:1072
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
       pipe_lock_nested fs/pipe.c:62 [inline]
       pipe_lock+0x6e/0x80 fs/pipe.c:70
       iter_file_splice_write+0x27d/0x1050 fs/splice.c:700
       do_splice_from fs/splice.c:851 [inline]
       do_splice+0x64a/0x1430 fs/splice.c:1147
       __do_sys_splice fs/splice.c:1414 [inline]
       __se_sys_splice fs/splice.c:1394 [inline]
       __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
       do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (sb_writers#3){.+.+}:
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
       __sb_start_write+0x214/0x370 fs/super.c:1387
       sb_start_write include/linux/fs.h:1597 [inline]
       mnt_want_write+0x3f/0xc0 fs/namespace.c:360
       ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
       ovl_setattr+0x10b/0xaf0 fs/overlayfs/inode.c:30
       notify_change+0xbde/0x1110 fs/attr.c:334
       do_truncate+0x1bd/0x2d0 fs/open.c:63
       handle_truncate fs/namei.c:3008 [inline]
       do_last fs/namei.c:3424 [inline]
       path_openat+0x375f/0x5150 fs/namei.c:3534
       do_filp_open+0x255/0x380 fs/namei.c:3564
       do_sys_open+0x568/0x700 fs/open.c:1063
       __do_sys_openat fs/open.c:1090 [inline]
       __se_sys_openat fs/open.c:1084 [inline]
       __x64_sys_openat+0x9d/0x100 fs/open.c:1084
       do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&ovl_i_mutex_key[depth]){+.+.}:
       lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
       down_write+0x8a/0x130 kernel/locking/rwsem.c:70
       inode_lock include/linux/fs.h:757 [inline]
       ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
       call_write_iter include/linux/fs.h:1857 [inline]
       new_sync_write fs/read_write.c:474 [inline]
       __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
       __kernel_write+0x10c/0x370 fs/read_write.c:506
       write_pipe_buf+0x180/0x240 fs/splice.c:797
       splice_from_pipe_feed fs/splice.c:503 [inline]
       __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
       splice_from_pipe+0x1ec/0x340 fs/splice.c:662
       default_file_splice_write+0x3c/0x90 fs/splice.c:809
       do_splice_from fs/splice.c:851 [inline]
       do_splice+0x64a/0x1430 fs/splice.c:1147
       __do_sys_splice fs/splice.c:1414 [inline]
       __se_sys_splice fs/splice.c:1394 [inline]
       __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
       do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  &ovl_i_mutex_key[depth] --> sb_writers#3 --> &pipe->mutex/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#3);
                               lock(&pipe->mutex/1);
  lock(&ovl_i_mutex_key[depth]);

 *** DEADLOCK ***

2 locks held by syz-executor338/5996:
 #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: file_start_write include/linux/fs.h:2810 [inline]
 #0: 00000000024e7b73 (sb_writers#8){.+.+}, at: do_splice+0xd2e/0x1430 fs/splice.c:1146
 #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
 #1: 00000000e0274330 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80 fs/pipe.c:70

stack backtrace:
CPU: 0 PID: 5996 Comm: syz-executor338 Not tainted 4.20.0-rc4+ #351
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x244/0x39d lib/dump_stack.c:113
 print_circular_bug.isra.35.cold.54+0x1bd/0x27d kernel/locking/lockdep.c:1221
 check_prev_add kernel/locking/lockdep.c:1863 [inline]
 check_prevs_add kernel/locking/lockdep.c:1976 [inline]
 validate_chain kernel/locking/lockdep.c:2347 [inline]
 __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
 lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
 down_write+0x8a/0x130 kernel/locking/rwsem.c:70
 inode_lock include/linux/fs.h:757 [inline]
 ovl_write_iter+0x151/0xd10 fs/overlayfs/file.c:231
 call_write_iter include/linux/fs.h:1857 [inline]
 new_sync_write fs/read_write.c:474 [inline]
 __vfs_write+0x6b8/0x9f0 fs/read_write.c:487
 __kernel_write+0x10c/0x370 fs/read_write.c:506
 write_pipe_buf+0x180/0x240 fs/splice.c:797
 splice_from_pipe_feed fs/splice.c:503 [inline]
 __splice_from_pipe+0x38b/0x7c0 fs/splice.c:627
 splice_from_pipe+0x1ec/0x340 fs/splice.c:662
 default_file_splice_write+0x3c/0x90 fs/splice.c:809
 do_splice_from fs/splice.c:851 [inline]
 do_splice+0x64a/0x1430 fs/splice.c:1147
 __do_sys_splice fs/splice.c:1414 [inline]
 __se_sys_splice fs/splice.c:1394 [inline]
 __x64_sys_splice+0x2c1/0x330 fs/splice.c:1394
 do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x445ad9
Code: e8 5c b7 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 2b 12 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f18e3f71cd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00000000006dac78 RCX: 0000000000445ad9
RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000007
RBP: 00000000006dac70 R08: 000100000000000a R09: 0000000000000007
R10: 0000000000000000 R11: 0000000000000216 R12: 00000000006dac7c
R13: 00007ffde0706e9f R14: 00007f18e3f729c0 R15: 00000000006dad4c