From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Christoph Hellwig <hch@infradead.org>
Cc: Jan Kara <jack@suse.cz>, Dave Chinner <dchinner@redhat.com>,
	linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH] loop: make autoclear operation asynchronous
Date: Thu, 2 Dec 2021 20:03:35 +0900
Message-ID: <fb6adcdc-fb56-3b90-355b-3f5a81220f2b@i-love.sakura.ne.jp>
In-Reply-To: <Yah0Q//l+I+eM+kf@infradead.org>

On 2021/12/02 16:22, Christoph Hellwig wrote:
> On Wed, Dec 01, 2021 at 11:41:23PM +0900, Tetsuo Handa wrote:
>> OK. Here is a patch.
>> Is this better than temporarily dropping disk->open_mutex ?
> 
> This looks much better, and also cleans up the horrible locking warts
> in __loop_clr_fd.
> 

What "the horrible locking warts" refer to? Below approach temporarily
drops disk->open_mutex. I think there is no locking difference between
synchronous and asynchronous...
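
To illustrate that point, here is a minimal sketch of the
drop-and-reacquire pattern used in the lo_release() hunk below. Field
and function names follow the loop driver, but this is only a sketch,
not the patch itself; the hypothetical release_sketch() stands in for
lo_release():

	static void release_sketch(struct gendisk *disk)
	{
		struct loop_device *lo = disk->private_data;

		/* The caller (blkdev_put()) holds disk->open_mutex. */
		mutex_unlock(&disk->open_mutex);
		/*
		 * With open_mutex dropped, __loop_clr_fd() can take it
		 * unconditionally for the partition rescan, so the
		 * "bool release" parameter and its conditional locking
		 * are no longer needed.
		 */
		__loop_clr_fd(lo);
		/* Reacquire before returning to the caller. */
		mutex_lock(&disk->open_mutex);
	}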

Anyway, I resent
https://lkml.kernel.org/r/d1f760f9-cdb2-f40d-33d8-bfa517c731be@i-love.sakura.ne.jp
so that it can be applied before "loop: replace loop_validate_mutex with loop_validate_spinlock".

---
 drivers/block/loop.c | 33 +++++++--------------------------
 1 file changed, 7 insertions(+), 26 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index ba76319b5544..31d3fbe67fea 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1082,7 +1082,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	return error;
 }
 
-static void __loop_clr_fd(struct loop_device *lo, bool release)
+static void __loop_clr_fd(struct loop_device *lo)
 {
 	struct file *filp;
 	gfp_t gfp = lo->old_gfp_mask;
@@ -1153,31 +1153,15 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	if (lo->lo_flags & LO_FLAGS_PARTSCAN) {
 		int err;
 
-		/*
-		 * open_mutex has been held already in release path, so don't
-		 * acquire it if this function is called in such case.
-		 *
-		 * If the reread partition isn't from release path, lo_refcnt
-		 * must be at least one and it can only become zero when the
-		 * current holder is released.
-		 */
-		if (!release)
-			mutex_lock(&lo->lo_disk->open_mutex);
+		mutex_lock(&lo->lo_disk->open_mutex);
 		err = bdev_disk_changed(lo->lo_disk, false);
-		if (!release)
-			mutex_unlock(&lo->lo_disk->open_mutex);
+		mutex_unlock(&lo->lo_disk->open_mutex);
 		if (err)
 			pr_warn("%s: partition scan of loop%d failed (rc=%d)\n",
 				__func__, lo->lo_number, err);
 		/* Device is gone, no point in returning error */
 	}
 
-	/*
-	 * lo->lo_state is set to Lo_unbound here after above partscan has
-	 * finished. There cannot be anybody else entering __loop_clr_fd() as
-	 * Lo_rundown state protects us from all the other places trying to
-	 * change the 'lo' device.
-	 */
 	lo->lo_flags = 0;
 	if (!part_shift)
 		lo->lo_disk->flags |= GENHD_FL_NO_PART;
@@ -1185,11 +1169,6 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	lo->lo_state = Lo_unbound;
 	mutex_unlock(&lo->lo_mutex);
 
-	/*
-	 * Need not hold lo_mutex to fput backing file. Calling fput holding
-	 * lo_mutex triggers a circular lock dependency possibility warning as
-	 * fput can take open_mutex which is usually taken before lo_mutex.
-	 */
 	fput(filp);
 }
 
@@ -1222,7 +1201,7 @@ static int loop_clr_fd(struct loop_device *lo)
 	lo->lo_state = Lo_rundown;
 	mutex_unlock(&lo->lo_mutex);
 
-	__loop_clr_fd(lo, false);
+	__loop_clr_fd(lo);
 	return 0;
 }
 
@@ -1747,7 +1726,9 @@ static void lo_release(struct gendisk *disk, fmode_t mode)
 		 * In autoclear mode, stop the loop thread
 		 * and remove configuration after last close.
 		 */
-		__loop_clr_fd(lo, true);
+		mutex_unlock(&lo->lo_disk->open_mutex);
+		__loop_clr_fd(lo);
+		mutex_lock(&lo->lo_disk->open_mutex);
 		return;
 	} else if (lo->lo_state == Lo_bound) {
 		/*
-- 
2.18.4


Thread overview: 19+ messages
2021-11-27  7:10 [syzbot] possible deadlock in blkdev_put (2) syzbot
2021-11-27 11:27 ` Tetsuo Handa
2021-11-28  5:32   ` Tetsuo Handa
2021-11-28  7:42     ` Tetsuo Handa
2021-11-29 10:21       ` Christoph Hellwig
2021-11-29 10:36         ` Tetsuo Handa
2021-11-29 14:13           ` Tetsuo Handa
2021-11-30 12:57           ` Christoph Hellwig
2021-12-01 14:41             ` [PATCH] loop: make autoclear operation asynchronous Tetsuo Handa
2021-12-02  7:22               ` Christoph Hellwig
2021-12-02 11:03                 ` Tetsuo Handa [this message]
2021-12-02 12:16               ` Jan Kara
2021-12-02 14:39                 ` Tetsuo Handa
2021-12-02 18:05                   ` Jan Kara
2021-12-03  6:50                     ` Christoph Hellwig
2021-12-03 11:01                       ` Tetsuo Handa
2021-12-08  9:56                         ` Tetsuo Handa
2021-12-08  5:05 ` [syzbot] possible deadlock in blkdev_put (2) syzbot
2021-12-08 11:42 ` syzbot
