linux-btrfs.vger.kernel.org archive mirror
From: Anand Jain <anand.jain@oracle.com>
To: fdmanana@gmail.com, Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>, kernel-team@fb.com
Subject: Re: [PATCH v2 2/7] btrfs: do not take the uuid_mutex in btrfs_rm_device
Date: Thu, 23 Sep 2021 12:15:58 +0800	[thread overview]
Message-ID: <ff6014a3-42b9-351f-c7c8-6779a3407e66@oracle.com> (raw)
In-Reply-To: <CAL3q7H6r-d_m5UbvOyU=tt_EJ400O0V9zvoBx5Op+fTMAciErQ@mail.gmail.com>




> generic/648, on latest misc-next (that has this patch integrated),
> also triggers the same type of lockdep warning involving the same two
> locks:


  This lockdep warning is fixed by the yet-to-be-merged patch:

   [PATCH v2 3/7] btrfs: do not read super look for a device path


Thanks, Anand


> 
> [19738.081729] ======================================================
> [19738.082620] WARNING: possible circular locking dependency detected
> [19738.083511] 5.15.0-rc2-btrfs-next-99 #1 Not tainted
> [19738.084234] ------------------------------------------------------
> [19738.085149] umount/508378 is trying to acquire lock:
> [19738.085884] ffff97a34c161d48 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x8b/0x5b0
> [19738.087180]
>                 but task is already holding lock:
> [19738.088048] ffff97a31f64d4a0 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x5a/0x680 [loop]
> [19738.089274]
>                 which lock already depends on the new lock.
> 
> [19738.090287]
>                 the existing dependency chain (in reverse order) is:
> [19738.091216]
>                 -> #8 (&lo->lo_mutex){+.+.}-{3:3}:
> [19738.091959]        __mutex_lock+0x92/0x900
> [19738.092473]        lo_open+0x28/0x60 [loop]
> [19738.093018]        blkdev_get_whole+0x28/0x90
> [19738.093650]        blkdev_get_by_dev.part.0+0x142/0x320
> [19738.094298]        blkdev_open+0x5e/0xa0
> [19738.094790]        do_dentry_open+0x163/0x390
> [19738.095425]        path_openat+0x3f0/0xa80
> [19738.096041]        do_filp_open+0xa9/0x150
> [19738.096657]        do_sys_openat2+0x97/0x160
> [19738.097299]        __x64_sys_openat+0x54/0x90
> [19738.097914]        do_syscall_64+0x3b/0xc0
> [19738.098433]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.099243]
>                 -> #7 (&disk->open_mutex){+.+.}-{3:3}:
> [19738.100259]        __mutex_lock+0x92/0x900
> [19738.100865]        blkdev_get_by_dev.part.0+0x56/0x320
> [19738.101530]        swsusp_check+0x19/0x150
> [19738.102046]        software_resume.part.0+0xb8/0x150
> [19738.102678]        resume_store+0xaf/0xd0
> [19738.103181]        kernfs_fop_write_iter+0x140/0x1e0
> [19738.103799]        new_sync_write+0x122/0x1b0
> [19738.104341]        vfs_write+0x29e/0x3d0
> [19738.104831]        ksys_write+0x68/0xe0
> [19738.105309]        do_syscall_64+0x3b/0xc0
> [19738.105823]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.106524]
>                 -> #6 (system_transition_mutex/1){+.+.}-{3:3}:
> [19738.107393]        __mutex_lock+0x92/0x900
> [19738.107911]        software_resume.part.0+0x18/0x150
> [19738.108537]        resume_store+0xaf/0xd0
> [19738.109057]        kernfs_fop_write_iter+0x140/0x1e0
> [19738.109675]        new_sync_write+0x122/0x1b0
> [19738.110218]        vfs_write+0x29e/0x3d0
> [19738.110711]        ksys_write+0x68/0xe0
> [19738.111190]        do_syscall_64+0x3b/0xc0
> [19738.111699]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.112388]
>                 -> #5 (&of->mutex){+.+.}-{3:3}:
> [19738.113089]        __mutex_lock+0x92/0x900
> [19738.113600]        kernfs_seq_start+0x2a/0xb0
> [19738.114141]        seq_read_iter+0x101/0x4d0
> [19738.114679]        new_sync_read+0x11b/0x1a0
> [19738.115212]        vfs_read+0x128/0x1c0
> [19738.115691]        ksys_read+0x68/0xe0
> [19738.116159]        do_syscall_64+0x3b/0xc0
> [19738.116670]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.117382]
>                 -> #4 (&p->lock){+.+.}-{3:3}:
> [19738.118062]        __mutex_lock+0x92/0x900
> [19738.118580]        seq_read_iter+0x51/0x4d0
> [19738.119102]        proc_reg_read_iter+0x48/0x80
> [19738.119651]        generic_file_splice_read+0x102/0x1b0
> [19738.120301]        splice_file_to_pipe+0xbc/0xd0
> [19738.120879]        do_sendfile+0x14e/0x5a0
> [19738.121389]        do_syscall_64+0x3b/0xc0
> [19738.121901]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.122597]
>                 -> #3 (&pipe->mutex/1){+.+.}-{3:3}:
> [19738.123339]        __mutex_lock+0x92/0x900
> [19738.123850]        iter_file_splice_write+0x98/0x440
> [19738.124475]        do_splice+0x36b/0x880
> [19738.124981]        __do_splice+0xde/0x160
> [19738.125483]        __x64_sys_splice+0x92/0x110
> [19738.126037]        do_syscall_64+0x3b/0xc0
> [19738.126553]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.127245]
>                 -> #2 (sb_writers#14){.+.+}-{0:0}:
> [19738.127978]        lo_write_bvec+0xea/0x2a0 [loop]
> [19738.128576]        loop_process_work+0x257/0xdb0 [loop]
> [19738.129224]        process_one_work+0x24c/0x5b0
> [19738.129789]        worker_thread+0x55/0x3c0
> [19738.130311]        kthread+0x155/0x180
> [19738.130783]        ret_from_fork+0x22/0x30
> [19738.131296]
>                 -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
> [19738.132262]        process_one_work+0x223/0x5b0
> [19738.132827]        worker_thread+0x55/0x3c0
> [19738.133365]        kthread+0x155/0x180
> [19738.133834]        ret_from_fork+0x22/0x30
> [19738.134350]
>                 -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
> [19738.135153]        __lock_acquire+0x130e/0x2210
> [19738.135715]        lock_acquire+0xd7/0x310
> [19738.136224]        flush_workqueue+0xb5/0x5b0
> [19738.136766]        drain_workqueue+0xa0/0x110
> [19738.137308]        destroy_workqueue+0x36/0x280
> [19738.137870]        __loop_clr_fd+0xb4/0x680 [loop]
> [19738.138473]        blkdev_put+0xc7/0x220
> [19738.138964]        close_fs_devices+0x95/0x220 [btrfs]
> [19738.139685]        btrfs_close_devices+0x48/0x160 [btrfs]
> [19738.140379]        generic_shutdown_super+0x74/0x110
> [19738.141011]        kill_anon_super+0x14/0x30
> [19738.141542]        btrfs_kill_super+0x12/0x20 [btrfs]
> [19738.142189]        deactivate_locked_super+0x31/0xa0
> [19738.142812]        cleanup_mnt+0x147/0x1c0
> [19738.143322]        task_work_run+0x5c/0xa0
> [19738.143831]        exit_to_user_mode_prepare+0x20c/0x210
> [19738.144487]        syscall_exit_to_user_mode+0x27/0x60
> [19738.145125]        do_syscall_64+0x48/0xc0
> [19738.145636]        entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.146466]
>                 other info that might help us debug this:
> 
> [19738.147602] Chain exists of:
>                   (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
> 
> [19738.149221]  Possible unsafe locking scenario:
> 
> [19738.149952]        CPU0                    CPU1
> [19738.150520]        ----                    ----
> [19738.151082]   lock(&lo->lo_mutex);
> [19738.151508]                                lock(&disk->open_mutex);
> [19738.152276]                                lock(&lo->lo_mutex);
> [19738.153010]   lock((wq_completion)loop0);
> [19738.153510]
>                  *** DEADLOCK ***
> 
> [19738.154241] 4 locks held by umount/508378:
> [19738.154756]  #0: ffff97a30dd9c0e8 (&type->s_umount_key#62){++++}-{3:3}, at: deactivate_super+0x2c/0x40
> [19738.155900]  #1: ffffffffc0ac5f10 (uuid_mutex){+.+.}-{3:3}, at: btrfs_close_devices+0x40/0x160 [btrfs]
> [19738.157094]  #2: ffff97a31bc6d928 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0x3a/0x220
> [19738.158137]  #3: ffff97a31f64d4a0 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x5a/0x680 [loop]
> [19738.159244]
>                 stack backtrace:
> [19738.159784] CPU: 2 PID: 508378 Comm: umount Not tainted 5.15.0-rc2-btrfs-next-99 #1
> [19738.160723] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [19738.162132] Call Trace:
> [19738.162448]  dump_stack_lvl+0x57/0x72
> [19738.162908]  check_noncircular+0xf3/0x110
> [19738.163411]  __lock_acquire+0x130e/0x2210
> [19738.163912]  lock_acquire+0xd7/0x310
> [19738.164358]  ? flush_workqueue+0x8b/0x5b0
> [19738.164859]  ? lockdep_init_map_type+0x51/0x260
> [19738.165437]  ? lockdep_init_map_type+0x51/0x260
> [19738.165999]  flush_workqueue+0xb5/0x5b0
> [19738.166481]  ? flush_workqueue+0x8b/0x5b0
> [19738.166990]  ? __mutex_unlock_slowpath+0x45/0x280
> [19738.167574]  drain_workqueue+0xa0/0x110
> [19738.168052]  destroy_workqueue+0x36/0x280
> [19738.168551]  __loop_clr_fd+0xb4/0x680 [loop]
> [19738.169084]  blkdev_put+0xc7/0x220
> [19738.169510]  close_fs_devices+0x95/0x220 [btrfs]
> [19738.170109]  btrfs_close_devices+0x48/0x160 [btrfs]
> [19738.170745]  generic_shutdown_super+0x74/0x110
> [19738.171300]  kill_anon_super+0x14/0x30
> [19738.171760]  btrfs_kill_super+0x12/0x20 [btrfs]
> [19738.172342]  deactivate_locked_super+0x31/0xa0
> [19738.172880]  cleanup_mnt+0x147/0x1c0
> [19738.173343]  task_work_run+0x5c/0xa0
> [19738.173781]  exit_to_user_mode_prepare+0x20c/0x210
> [19738.174381]  syscall_exit_to_user_mode+0x27/0x60
> [19738.174957]  do_syscall_64+0x48/0xc0
> [19738.175407]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [19738.176037] RIP: 0033:0x7f4d7104fee7
> [19738.176487] Code: ff 0b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 79 ff 0b 00 f7 d8 64 89 01 48
> [19738.178787] RSP: 002b:00007ffeca2fd758 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
> [19738.179722] RAX: 0000000000000000 RBX: 00007f4d71175264 RCX: 00007f4d7104fee7
> [19738.180601] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00005615eb38bdd0
> [19738.181496] RBP: 00005615eb38bba0 R08: 0000000000000000 R09: 00007ffeca2fc4d0
> [19738.182376] R10: 00005615eb38bdf0 R11: 0000000000000246 R12: 0000000000000000
> [19738.183249] R13: 00005615eb38bdd0 R14: 00005615eb38bcb0 R15: 0000000000000000
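
  A side note for readers less used to these reports: the "Possible unsafe
  locking scenario" above is lockdep's usual way of showing a circular lock
  dependency. Nothing has actually deadlocked here; lockdep is warning that
  the ordering (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
  can, under the wrong interleaving, end up waiting on itself. Below is a
  minimal userspace sketch of the same shape, reduced to two pthread mutexes
  with hypothetical names; it is an illustration only, not the actual
  loop/btrfs code paths.

/*
 * Illustration only: a userspace analogue of the inverted lock ordering
 * lockdep reports above, simplified to two plain mutexes. The names are
 * hypothetical and do not correspond to real kernel symbols.
 *
 * Build: gcc -pthread abba.c -o abba
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* think &lo->lo_mutex          */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* think the loop0 workqueue side */

/* CPU0 in the report: holds lo_mutex, then waits on the workqueue. */
static void *cpu0(void *arg)
{
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* CPU1 in the report: holds the other side first, then needs lo_mutex. */
static void *cpu1(void *arg)
{
	pthread_mutex_lock(&lock_b);
	pthread_mutex_lock(&lock_a);
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	/* If the two threads interleave badly, both joins hang forever. */
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	printf("no deadlock on this run, but the ordering is still unsafe\n");
	return 0;
}

  Most runs of the sketch complete fine, which is exactly why lockdep is
  valuable: it flags the unsafe ordering even when the timing never happens
  to line up into an actual deadlock.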


Thread overview: 39+ messages
2021-07-27 21:01 [PATCH v2 0/7] Josef Bacik
2021-07-27 21:01 ` [PATCH v2 1/7] btrfs: do not call close_fs_devices in btrfs_rm_device Josef Bacik
2021-09-01  8:13   ` Anand Jain
2021-07-27 21:01 ` [PATCH v2 2/7] btrfs: do not take the uuid_mutex " Josef Bacik
2021-09-01 12:01   ` Anand Jain
2021-09-01 17:08     ` David Sterba
2021-09-01 17:10     ` Josef Bacik
2021-09-01 19:49       ` Anand Jain
2021-09-02 12:58   ` David Sterba
2021-09-02 14:10     ` Josef Bacik
2021-09-17 14:33       ` David Sterba
2021-09-20  7:45   ` Anand Jain
2021-09-20  8:26     ` David Sterba
2021-09-20  9:41       ` Anand Jain
2021-09-23  4:33         ` Anand Jain
2021-09-21 11:59   ` Filipe Manana
2021-09-21 12:17     ` Filipe Manana
2021-09-22 15:33       ` Filipe Manana
2021-09-23  4:15         ` Anand Jain [this message]
2021-09-23  3:58   ` [PATCH] btrfs: drop lockdep assert in close_fs_devices() Anand Jain
2021-09-23  4:04     ` Anand Jain
2021-07-27 21:01 ` [PATCH v2 3/7] btrfs: do not read super look for a device path Josef Bacik
2021-08-25  2:00   ` Anand Jain
2021-09-27 15:32     ` Josef Bacik
2021-09-28 11:50       ` Anand Jain
2021-07-27 21:01 ` [PATCH v2 4/7] btrfs: update the bdev time directly when closing Josef Bacik
2021-08-25  0:35   ` Anand Jain
2021-09-02 12:16   ` David Sterba
2021-07-27 21:01 ` [PATCH v2 5/7] btrfs: delay blkdev_put until after the device remove Josef Bacik
2021-08-25  1:00   ` Anand Jain
2021-09-02 12:16   ` David Sterba
2021-07-27 21:01 ` [PATCH v2 6/7] btrfs: unify common code for the v1 and v2 versions of " Josef Bacik
2021-08-25  1:19   ` Anand Jain
2021-09-01 14:05   ` Nikolay Borisov
2021-07-27 21:01 ` [PATCH v2 7/7] btrfs: do not take the device_list_mutex in clone_fs_devices Josef Bacik
2021-08-24 22:08   ` Anand Jain
2021-09-01 13:35   ` Nikolay Borisov
2021-09-02 12:59   ` David Sterba
2021-09-17 15:06 ` [PATCH v2 0/7] David Sterba
