From: Anand Jain <anand.jain@oracle.com>
To: Josef Bacik <josef@toxicpanda.com>,
	linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 2/6] btrfs: do not take the uuid_mutex in btrfs_rm_device
Date: Wed, 28 Jul 2021 08:15:31 +0800	[thread overview]
Message-ID: <8b170a9d-b810-ade5-db74-acb618eb9419@oracle.com> (raw)
In-Reply-To: <bb8ebd37d7ca60bc78fab5368274c99a03004fe5.1627414703.git.josef@toxicpanda.com>

On 28/07/2021 03:47, Josef Bacik wrote:
> We got the following lockdep splat while running xfstests (specifically
> btrfs/003 and btrfs/020 in a row) with the new rc.  This was uncovered
> by 87579e9b7d8d ("loop: use worker per cgroup instead of kworker"),
> which converted loop to use workqueues; these come with lockdep
> annotations that don't exist with kworkers.  The lockdep splat is as
> follows:
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.14.0-rc2-custom+ #34 Not tainted
> ------------------------------------------------------
> losetup/156417 is trying to acquire lock:
> ffff9c7645b02d38 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x84/0x600
> 
> but task is already holding lock:
> ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x650 [loop]
> 
> which lock already depends on the new lock.
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #5 (&lo->lo_mutex){+.+.}-{3:3}:
>         __mutex_lock+0xba/0x7c0
>         lo_open+0x28/0x60 [loop]
>         blkdev_get_whole+0x28/0xf0
>         blkdev_get_by_dev.part.0+0x168/0x3c0
>         blkdev_open+0xd2/0xe0
>         do_dentry_open+0x163/0x3a0
>         path_openat+0x74d/0xa40
>         do_filp_open+0x9c/0x140
>         do_sys_openat2+0xb1/0x170
>         __x64_sys_openat+0x54/0x90
>         do_syscall_64+0x3b/0x90
>         entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> -> #4 (&disk->open_mutex){+.+.}-{3:3}:
>         __mutex_lock+0xba/0x7c0
>         blkdev_get_by_dev.part.0+0xd1/0x3c0
>         blkdev_get_by_path+0xc0/0xd0
>         btrfs_scan_one_device+0x52/0x1f0 [btrfs]
>         btrfs_control_ioctl+0xac/0x170 [btrfs]
>         __x64_sys_ioctl+0x83/0xb0
>         do_syscall_64+0x3b/0x90
>         entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> -> #3 (uuid_mutex){+.+.}-{3:3}:
>         __mutex_lock+0xba/0x7c0
>         btrfs_rm_device+0x48/0x6a0 [btrfs]
>         btrfs_ioctl+0x2d1c/0x3110 [btrfs]
>         __x64_sys_ioctl+0x83/0xb0
>         do_syscall_64+0x3b/0x90
>         entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> -> #2 (sb_writers#11){.+.+}-{0:0}:
>         lo_write_bvec+0x112/0x290 [loop]
>         loop_process_work+0x25f/0xcb0 [loop]
>         process_one_work+0x28f/0x5d0
>         worker_thread+0x55/0x3c0
>         kthread+0x140/0x170
>         ret_from_fork+0x22/0x30
> 
> -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
>         process_one_work+0x266/0x5d0
>         worker_thread+0x55/0x3c0
>         kthread+0x140/0x170
>         ret_from_fork+0x22/0x30
> 
> -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
>         __lock_acquire+0x1130/0x1dc0
>         lock_acquire+0xf5/0x320
>         flush_workqueue+0xae/0x600
>         drain_workqueue+0xa0/0x110
>         destroy_workqueue+0x36/0x250
>         __loop_clr_fd+0x9a/0x650 [loop]
>         lo_ioctl+0x29d/0x780 [loop]
>         block_ioctl+0x3f/0x50
>         __x64_sys_ioctl+0x83/0xb0
>         do_syscall_64+0x3b/0x90
>         entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> other info that might help us debug this:
> Chain exists of:
>    (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
>   Possible unsafe locking scenario:
>         CPU0                    CPU1
>         ----                    ----
>    lock(&lo->lo_mutex);
>                                 lock(&disk->open_mutex);
>                                 lock(&lo->lo_mutex);
>    lock((wq_completion)loop0);
> 
>   *** DEADLOCK ***
> 1 lock held by losetup/156417:
>   #0: ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x650 [loop]
> 
> stack backtrace:
> CPU: 8 PID: 156417 Comm: losetup Not tainted 5.14.0-rc2-custom+ #34
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
> Call Trace:
>   dump_stack_lvl+0x57/0x72
>   check_noncircular+0x10a/0x120
>   __lock_acquire+0x1130/0x1dc0
>   lock_acquire+0xf5/0x320
>   ? flush_workqueue+0x84/0x600
>   flush_workqueue+0xae/0x600
>   ? flush_workqueue+0x84/0x600
>   drain_workqueue+0xa0/0x110
>   destroy_workqueue+0x36/0x250
>   __loop_clr_fd+0x9a/0x650 [loop]
>   lo_ioctl+0x29d/0x780 [loop]
>   ? __lock_acquire+0x3a0/0x1dc0
>   ? update_dl_rq_load_avg+0x152/0x360
>   ? lock_is_held_type+0xa5/0x120
>   ? find_held_lock.constprop.0+0x2b/0x80
>   block_ioctl+0x3f/0x50
>   __x64_sys_ioctl+0x83/0xb0
>   do_syscall_64+0x3b/0x90
>   entry_SYSCALL_64_after_hwframe+0x44/0xae
> RIP: 0033:0x7f645884de6b
> 
> Usually the uuid_mutex exists to protect the fs_devices that map
> together all of the devices that match a specific uuid.  In rm_device
> we're messing with the uuid of a device, so it makes sense to protect
> that here.
> 
> However, in doing that it pulls in a whole host of lockdep dependencies:
> we call mnt_want_write_file() on the sb before we grab the uuid_mutex,
> so the dependency chain under the uuid_mutex ends up nested under the
> normal sb write dependency chain, which causes problems with loop
> devices.
> 
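To make the ordering concrete, here is a condensed sketch of the
pre-patch shape of the remove path (not the literal call chain, just the
lock ordering it creates):

  /* in the ioctl entry point */
  ret = mnt_want_write_file(file);   /* takes sb_writers on this fs */
  if (ret)
          return ret;

  /* btrfs_rm_device() then did: */
  mutex_lock(&uuid_mutex);           /* uuid_mutex now nests inside sb_writers */
  /* ... the actual device removal, including btrfs_shrink_device() ... */
  mutex_unlock(&uuid_mutex);

  mnt_drop_write_file(file);         /* releases sb_writers */
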
> However, we do not need the uuid_mutex here.  If we call
> btrfs_scan_one_device() before we scratch the super block, we will find
> the fs_devices but not the device itself, and return EBUSY because the
> fs_devices is open.  If we call it after the scratch happens, the device
> will no longer appear to be a valid btrfs file system.
> 
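As a minimal sketch of the scan-side behaviour being relied on here
(illustrative only; the real logic lives in device_list_add(), and
lookup_device() below is a hypothetical stand-in for the device lookup):

  fs_devices = find_fsid(disk_super->fsid, NULL);
  /* post-scratch, reading the super block fails before we ever get here */

  device = lookup_device(fs_devices, devid, disk_super->dev_item.uuid);
  if (!device && fs_devices->opened)
          return ERR_PTR(-EBUSY);    /* fs is mounted, refuse to touch it */
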
> We do not need to worry about other operations modifying the fs_devices
> here, because we are protected by the exclusive operations locking.
> 
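For reference, as I read it the exclusive-op guard in the rm_dev ioctl
entry point is roughly this shape (not the exact code):

  if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_DEV_REMOVE))
          return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;

  ret = btrfs_rm_device(fs_info, device_path, devid);

  btrfs_exclop_finish(fs_info);
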
> So drop the uuid_mutex here in order to fix the lockdep splat.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>   fs/btrfs/volumes.c | 5 -----
>   1 file changed, 5 deletions(-)
> 
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 373be4e54f28..62cb7d39435c 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -2082,8 +2082,6 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
>   	u64 num_devices;
>   	int ret = 0;
>   
> -	mutex_lock(&uuid_mutex);
> -
>   	num_devices = btrfs_num_devices(fs_info);
>   
>   	ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
> @@ -2127,11 +2125,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
>   		mutex_unlock(&fs_info->chunk_mutex);
>   	}
>   
> -	mutex_unlock(&uuid_mutex);
>   	ret = btrfs_shrink_device(device, 0);
>   	if (!ret)
>   		btrfs_reada_remove_dev(device);
> -	mutex_lock(&uuid_mutex);
>   	if (ret)
>   		goto error_undo;
>   
> @@ -2200,7 +2196,6 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
>   	synchronize_rcu();
>   	btrfs_free_device(device);
>   out:
> -	mutex_unlock(&uuid_mutex);
>   	return ret;
>   
>   error_undo:
> 

  btrfs_rm_device() happens on a mounted device, so device scan --forget
  will skip this device. For the rest of the ioctl-based device
  operations, we have ...exclop_start(). Yeah, I see no reason to keep
  the uuid_mutex here.
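
  A minimal sketch of why --forget skips it, going from memory of
  btrfs_free_stale_devices() (illustrative, not the exact code):

    list_for_each_entry_safe(fs_devices, tmp, &fs_uuids, fs_list) {
            if (fs_devices->opened)         /* the filesystem is mounted */
                    continue;               /* forget leaves it alone */
            /* only unopened, stale entries matching the path are released */
    }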

Reviewed-by: Anand Jain <anand.jain@oracle.com>

Thanks, Anand

Thread overview: 10+ messages
2021-07-27 19:47 [PATCH 0/6] Fix lockdep issues around device removal Josef Bacik
2021-07-27 19:47 ` [PATCH 1/6] btrfs: do not check for ->num_devices == 0 in rm_device Josef Bacik
2021-07-27 22:35   ` Anand Jain
2021-07-27 22:42     ` Josef Bacik
2021-07-27 19:47 ` [PATCH 2/6] btrfs: do not take the uuid_mutex in btrfs_rm_device Josef Bacik
2021-07-28  0:15   ` Anand Jain [this message]
2021-07-27 19:47 ` [PATCH 3/6] btrfs: do not read super look for a device path Josef Bacik
2021-07-27 19:47 ` [PATCH 4/6] btrfs: update the bdev time directly when closing Josef Bacik
2021-07-27 19:47 ` [PATCH 5/6] btrfs: delay blkdev_put until after the device remove Josef Bacik
2021-07-27 19:47 ` [PATCH 6/6] btrfs: unify common code for the v1 and v2 versions of " Josef Bacik
