* [PATCH 0/2] block: Fix deadlock when merging requests with BFQ
@ 2021-05-20 22:33 Jan Kara
  2021-05-20 22:33 ` [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge() Jan Kara
  2021-05-20 22:33 ` [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock Jan Kara
  0 siblings, 2 replies; 13+ messages in thread
From: Jan Kara @ 2021-05-20 22:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Khazhy Kumykov, Paolo Valente, Jan Kara

Hello,

This patch series fixes a lockdep complaint and a possible deadlock that
can happen when blk_mq_sched_try_insert_merge() merges and frees a request.

								Honza


* [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge()
  2021-05-20 22:33 [PATCH 0/2] block: Fix deadlock when merging requests with BFQ Jan Kara
@ 2021-05-20 22:33 ` Jan Kara
  2021-05-21  0:42   ` Ming Lei
  2021-05-20 22:33 ` [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock Jan Kara
  1 sibling, 1 reply; 13+ messages in thread
From: Jan Kara @ 2021-05-20 22:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Khazhy Kumykov, Paolo Valente, Jan Kara

Most of the merging happens at the bio level. There should not be much
merging happening at the request level anymore. Furthermore, if we
backmerge a request into the previous one, the chances of being able to
merge the result into an even earlier request are slim - that could
succeed only if the requests were inserted in 2 1 3 order. Merging
multiple requests in elv_attempt_insert_merge() will be difficult to
handle once we want to pass requests to free back to the caller of
blk_mq_sched_try_insert_merge(). So just remove the possibility of
merging multiple requests in elv_attempt_insert_merge().
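
To illustrate the 2 1 3 case the removed loop was handling (the sector
numbers are made up): take requests A = sectors 0-7, B = 8-15, C = 16-23,
inserted in the order B, A, C:

  insert B: no request ends at sector 8, no merge
  insert A: no request ends at sector 0, no merge
  insert C: elv_rqhash_find(q, 16) finds B, C is appended to B -> B = 8-23;
            the loop then retries with B, elv_rqhash_find(q, 8) finds A,
            and B is appended to A -> A = 0-23 (the second, recursive merge)

Without the loop only the first of those two merges happens per insert.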

Signed-off-by: Jan Kara <jack@suse.cz>
---
 block/elevator.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/block/elevator.c b/block/elevator.c
index 440699c28119..098f4bd226f5 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -350,12 +350,11 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
  * we can append 'rq' to an existing request, so we can throw 'rq' away
  * afterwards.
  *
- * Returns true if we merged, false otherwise
+ * Returns true if we merged, false otherwise.
  */
 bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq)
 {
 	struct request *__rq;
-	bool ret;
 
 	if (blk_queue_nomerges(q))
 		return false;
@@ -369,21 +368,13 @@ bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq)
 	if (blk_queue_noxmerges(q))
 		return false;
 
-	ret = false;
 	/*
 	 * See if our hash lookup can find a potential backmerge.
 	 */
-	while (1) {
-		__rq = elv_rqhash_find(q, blk_rq_pos(rq));
-		if (!__rq || !blk_attempt_req_merge(q, __rq, rq))
-			break;
-
-		/* The merged request could be merged with others, try again */
-		ret = true;
-		rq = __rq;
-	}
-
-	return ret;
+	__rq = elv_rqhash_find(q, blk_rq_pos(rq));
+	if (!__rq || !blk_attempt_req_merge(q, __rq, rq))
+		return false;
+	return true;
 }
 
 void elv_merged_request(struct request_queue *q, struct request *rq,
-- 
2.26.2



* [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-20 22:33 [PATCH 0/2] block: Fix deadlock when merging requests with BFQ Jan Kara
  2021-05-20 22:33 ` [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge() Jan Kara
@ 2021-05-20 22:33 ` Jan Kara
  2021-05-21  0:57   ` Ming Lei
  1 sibling, 1 reply; 13+ messages in thread
From: Jan Kara @ 2021-05-20 22:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Khazhy Kumykov, Paolo Valente, Jan Kara

Lockdep complains about lock inversion between ioc->lock and bfqd->lock:

bfqd -> ioc:
 put_io_context+0x33/0x90 -> ioc->lock grabbed
 blk_mq_free_request+0x51/0x140
 blk_put_request+0xe/0x10
 blk_attempt_req_merge+0x1d/0x30
 elv_attempt_insert_merge+0x56/0xa0
 blk_mq_sched_try_insert_merge+0x4b/0x60
 bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
 blk_mq_sched_insert_requests+0xd6/0x2b0
 blk_mq_flush_plug_list+0x154/0x280
 blk_finish_plug+0x40/0x60
 ext4_writepages+0x696/0x1320
 do_writepages+0x1c/0x80
 __filemap_fdatawrite_range+0xd7/0x120
 sync_file_range+0xac/0xf0

ioc->bfqd:
 bfq_exit_icq+0xa3/0xe0 -> bfqd->lock grabbed
 put_io_context_active+0x78/0xb0 -> ioc->lock grabbed
 exit_io_context+0x48/0x50
 do_exit+0x7e9/0xdd0
 do_group_exit+0x54/0xc0
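
Read bottom-up, the two traces reduce to the classic ABBA pattern
(simplified, not the literal call chains):

 insert path:  lock(bfqd->lock)  ...  lock(ioc->lock)
 exit path:    lock(ioc->lock)   ...  lock(bfqd->lock)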

To avoid this inversion we change blk_mq_sched_try_insert_merge() to not
free the merged request but rather leave that up to the caller, similarly
to blk_mq_sched_try_merge(). And in bfq_insert_request() we make sure to
free the request after dropping bfqd->lock. As a nice consequence, this
also makes the locking rules in bfq_finish_requeue_request() more
consistent.

Fixes: aee69d78dec0 ("block, bfq: introduce the BFQ-v0 I/O scheduler as an extra scheduler")
Signed-off-by: Jan Kara <jack@suse.cz>
---
 block/bfq-iosched.c | 20 +++++++-------------
 block/blk-merge.c   | 19 ++++++++-----------
 block/blk.h         |  2 +-
 block/mq-deadline.c |  4 +++-
 4 files changed, 19 insertions(+), 26 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index acd1f881273e..4afdf0b93124 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2317,9 +2317,9 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 
 	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
 
+	spin_unlock_irq(&bfqd->lock);
 	if (free)
 		blk_mq_free_request(free);
-	spin_unlock_irq(&bfqd->lock);
 
 	return ret;
 }
@@ -5933,6 +5933,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	spin_lock_irq(&bfqd->lock);
 	if (blk_mq_sched_try_insert_merge(q, rq)) {
 		spin_unlock_irq(&bfqd->lock);
+		blk_put_request(rq);
 		return;
 	}
 
@@ -6376,6 +6377,7 @@ static void bfq_finish_requeue_request(struct request *rq)
 {
 	struct bfq_queue *bfqq = RQ_BFQQ(rq);
 	struct bfq_data *bfqd;
+	unsigned long flags;
 
 	/*
 	 * rq either is not associated with any icq, or is an already
@@ -6393,18 +6395,12 @@ static void bfq_finish_requeue_request(struct request *rq)
 					     rq->io_start_time_ns,
 					     rq->cmd_flags);
 
+	spin_lock_irqsave(&bfqd->lock, flags);
 	if (likely(rq->rq_flags & RQF_STARTED)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bfqd->lock, flags);
-
 		if (rq == bfqd->waited_rq)
 			bfq_update_inject_limit(bfqd, bfqq);
 
 		bfq_completed_request(bfqq, bfqd);
-		bfq_finish_requeue_request_body(bfqq);
-
-		spin_unlock_irqrestore(&bfqd->lock, flags);
 	} else {
 		/*
 		 * Request rq may be still/already in the scheduler,
@@ -6414,18 +6410,16 @@ static void bfq_finish_requeue_request(struct request *rq)
 		 * inconsistencies in the time interval from the end
 		 * of this function to the start of the deferred work.
 		 * This situation seems to occur only in process
-		 * context, as a consequence of a merge. In the
-		 * current version of the code, this implies that the
-		 * lock is held.
+		 * context, as a consequence of a merge.
 		 */
-
 		if (!RB_EMPTY_NODE(&rq->rb_node)) {
 			bfq_remove_request(rq->q, rq);
 			bfqg_stats_update_io_remove(bfqq_group(bfqq),
 						    rq->cmd_flags);
 		}
-		bfq_finish_requeue_request_body(bfqq);
 	}
+	bfq_finish_requeue_request_body(bfqq);
+	spin_unlock_irqrestore(&bfqd->lock, flags);
 
 	/*
 	 * Reset private fields. In case of a requeue, this allows
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 4d97fb6dd226..1398b52a24b4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -846,18 +846,15 @@ static struct request *attempt_front_merge(struct request_queue *q,
 	return NULL;
 }
 
-int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
-			  struct request *next)
+/*
+ * Try to merge 'next' into 'rq'. Return true if the merge happened, false
+ * otherwise. The caller is responsible for freeing 'next' if the merge
+ * happened.
+ */
+bool blk_attempt_req_merge(struct request_queue *q, struct request *rq,
+			   struct request *next)
 {
-	struct request *free;
-
-	free = attempt_merge(q, rq, next);
-	if (free) {
-		blk_put_request(free);
-		return 1;
-	}
-
-	return 0;
+	return attempt_merge(q, rq, next);
 }
 
 bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
diff --git a/block/blk.h b/block/blk.h
index 8b3591aee0a5..99ef4f7e7a70 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -225,7 +225,7 @@ ssize_t part_timeout_store(struct device *, struct device_attribute *,
 void __blk_queue_split(struct bio **bio, unsigned int *nr_segs);
 int ll_back_merge_fn(struct request *req, struct bio *bio,
 		unsigned int nr_segs);
-int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
+bool blk_attempt_req_merge(struct request_queue *q, struct request *rq,
 				struct request *next);
 unsigned int blk_recalc_rq_segments(struct request *rq);
 void blk_rq_set_mixed_merge(struct request *rq);
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 8eea2cbf2bf4..64dd78005ae6 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -494,8 +494,10 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	 */
 	blk_req_zone_write_unlock(rq);
 
-	if (blk_mq_sched_try_insert_merge(q, rq))
+	if (blk_mq_sched_try_insert_merge(q, rq)) {
+		blk_put_request(rq);
 		return;
+	}
 
 	trace_block_rq_insert(rq);
 
-- 
2.26.2



* Re: [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge()
  2021-05-20 22:33 ` [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge() Jan Kara
@ 2021-05-21  0:42   ` Ming Lei
  2021-05-21 11:53     ` Jan Kara
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-05-21  0:42 UTC (permalink / raw)
  To: Jan Kara; +Cc: Jens Axboe, linux-block, Khazhy Kumykov, Paolo Valente

On Fri, May 21, 2021 at 12:33:52AM +0200, Jan Kara wrote:
> Most of the merging happens at bio level. There should not be much
> merging happening at request level anymore. Furthermore if we backmerged
> a request to the previous one, the chances to be able to merge the
> result to even previous request are slim - that could succeed only if
> requests were inserted in 2 1 3 order. Merging more requests in

Right, but some workloads do have this kind of pattern.

Take qemu IO emulation as an example: it can often be thought of as a single
job doing native aio, direct IO with a high queue depth. The IO originates
from one VM but may come from multiple jobs inside the VM, so bio merging may
not hit much because of IO emulation timing (virtio-scsi/blk's MQ, or IO can
be interleaved from multiple jobs via the SQ transport), but request merging
can really make a difference; see the recent patch in the following link:

https://lore.kernel.org/linux-block/3f61e939-d95a-1dd1-6870-e66795cfc1b1@suse.de/T/#t

> elv_attempt_insert_merge() will be difficult to handle when we want to
> pass requests to free back to the caller of
> blk_mq_sched_try_insert_merge(). So just remove the possibility of
> merging multiple requests in elv_attempt_insert_merge().

This way will cause a regression.


Thanks, 
Ming



* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-20 22:33 ` [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock Jan Kara
@ 2021-05-21  0:57   ` Ming Lei
  2021-05-21  3:29     ` Khazhy Kumykov
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-05-21  0:57 UTC (permalink / raw)
  To: Jan Kara; +Cc: Jens Axboe, linux-block, Khazhy Kumykov, Paolo Valente

On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> 
> bfqd -> ioc:
>  put_io_context+0x33/0x90 -> ioc->lock grabbed
>  blk_mq_free_request+0x51/0x140
>  blk_put_request+0xe/0x10
>  blk_attempt_req_merge+0x1d/0x30
>  elv_attempt_insert_merge+0x56/0xa0
>  blk_mq_sched_try_insert_merge+0x4b/0x60
>  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed

We could move blk_put_request() into the scheduler code, then the lock
inversion is avoided. So far only mq-deadline and bfq call into
blk_mq_sched_try_insert_merge(), and this change should be small.


Thanks,
Ming



* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-21  0:57   ` Ming Lei
@ 2021-05-21  3:29     ` Khazhy Kumykov
  2021-05-21  6:54       ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Khazhy Kumykov @ 2021-05-21  3:29 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jan Kara, Jens Axboe, linux-block, Paolo Valente


On Thu, May 20, 2021 at 5:57 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> > Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> >
> > bfqd -> ioc:
> >  put_io_context+0x33/0x90 -> ioc->lock grabbed
> >  blk_mq_free_request+0x51/0x140
> >  blk_put_request+0xe/0x10
> >  blk_attempt_req_merge+0x1d/0x30
> >  elv_attempt_insert_merge+0x56/0xa0
> >  blk_mq_sched_try_insert_merge+0x4b/0x60
> >  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
>
> We could move blk_put_request() into scheduler code, then the lock
> inversion is avoided. So far only mq-deadline and bfq calls into
> blk_mq_sched_try_insert_merge(), and this change should be small.

We'd potentially be putting multiple requests if we keep the recursive merge.

Could we move the backmerge loop to the schedulers, perhaps?

>
>
> Thanks,
> Ming
>



* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-21  3:29     ` Khazhy Kumykov
@ 2021-05-21  6:54       ` Ming Lei
  2021-05-21 12:05         ` Jan Kara
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-05-21  6:54 UTC (permalink / raw)
  To: Khazhy Kumykov; +Cc: Jan Kara, Jens Axboe, linux-block, Paolo Valente

On Thu, May 20, 2021 at 08:29:49PM -0700, Khazhy Kumykov wrote:
> On Thu, May 20, 2021 at 5:57 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> > > Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> > >
> > > bfqd -> ioc:
> > >  put_io_context+0x33/0x90 -> ioc->lock grabbed
> > >  blk_mq_free_request+0x51/0x140
> > >  blk_put_request+0xe/0x10
> > >  blk_attempt_req_merge+0x1d/0x30
> > >  elv_attempt_insert_merge+0x56/0xa0
> > >  blk_mq_sched_try_insert_merge+0x4b/0x60
> > >  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
> >
> > We could move blk_put_request() into scheduler code, then the lock
> > inversion is avoided. So far only mq-deadline and bfq calls into
> > blk_mq_sched_try_insert_merge(), and this change should be small.
> 
> We'd potentially be putting multiple requests if we keep the recursive merge.

Oh, we can still pass a list to hold all the requests to be freed, then free
them all outside the lock in the scheduler code.
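
Roughly like this (just a sketch - the 'free' list parameter and the exact
call sites are hypothetical here; it assumes blk_attempt_req_merge() no
longer frees the request, as in patch 1, and the return-value bookkeeping
is omitted):

	/*
	 * elv_attempt_insert_merge() collects merged-away requests on a
	 * caller-supplied list instead of freeing them; at this point the
	 * request is off the scheduler lists so queuelist can be reused.
	 */
	while (1) {
		__rq = elv_rqhash_find(q, blk_rq_pos(rq));
		if (!__rq || !blk_attempt_req_merge(q, __rq, rq))
			break;
		list_add(&rq->queuelist, free);
		rq = __rq;
	}

	/* and a caller like bfq_insert_request() frees them unlocked: */
	LIST_HEAD(free);
	struct request *next;

	spin_lock_irq(&bfqd->lock);
	if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
		spin_unlock_irq(&bfqd->lock);
		list_for_each_entry_safe(rq, next, &free, queuelist)
			blk_mq_free_request(rq);
		return;
	}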


Thanks, 
Ming



* Re: [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge()
  2021-05-21  0:42   ` Ming Lei
@ 2021-05-21 11:53     ` Jan Kara
  2021-05-21 13:12       ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2021-05-21 11:53 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jan Kara, Jens Axboe, linux-block, Khazhy Kumykov, Paolo Valente

On Fri 21-05-21 08:42:16, Ming Lei wrote:
> On Fri, May 21, 2021 at 12:33:52AM +0200, Jan Kara wrote:
> > Most of the merging happens at bio level. There should not be much
> > merging happening at request level anymore. Furthermore if we backmerged
> > a request to the previous one, the chances to be able to merge the
> > result to even previous request are slim - that could succeed only if
> > requests were inserted in 2 1 3 order. Merging more requests in
> 
> Right, but some workload has this kind of pattern.
> 
> For example of qemu IO emulation, it often can be thought as single job,
> native aio, direct io with high queue depth. IOs is originated from one VM, but
> may be from multiple jobs in the VM, so bio merge may not hit much because of IO
> emulation timing(virtio-scsi/blk's MQ, or IO can be interleaved from multiple
> jobs via the SQ transport), but request merge can really make a difference, see
> recent patch in the following link:
> 
> https://lore.kernel.org/linux-block/3f61e939-d95a-1dd1-6870-e66795cfc1b1@suse.de/T/#t

Oh, request merging definitely does make a difference. But the elevator
hash & merge logic I'm modifying here is used only by BFQ and MQ-DEADLINE
AFAICT. And these IO schedulers already call blk_mq_sched_try_merge()
from their .bio_merge handler, which gets called from blk_mq_submit_bio().
So all the merging that can happen in the code I remove should have already
happened. Or am I missing something?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-21  6:54       ` Ming Lei
@ 2021-05-21 12:05         ` Jan Kara
  2021-05-21 13:36           ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2021-05-21 12:05 UTC (permalink / raw)
  To: Ming Lei; +Cc: Khazhy Kumykov, Jan Kara, Jens Axboe, linux-block, Paolo Valente

On Fri 21-05-21 14:54:09, Ming Lei wrote:
> On Thu, May 20, 2021 at 08:29:49PM -0700, Khazhy Kumykov wrote:
> > On Thu, May 20, 2021 at 5:57 PM Ming Lei <ming.lei@redhat.com> wrote:
> > >
> > > On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> > > > Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> > > >
> > > > bfqd -> ioc:
> > > >  put_io_context+0x33/0x90 -> ioc->lock grabbed
> > > >  blk_mq_free_request+0x51/0x140
> > > >  blk_put_request+0xe/0x10
> > > >  blk_attempt_req_merge+0x1d/0x30
> > > >  elv_attempt_insert_merge+0x56/0xa0
> > > >  blk_mq_sched_try_insert_merge+0x4b/0x60
> > > >  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
> > >
> > > We could move blk_put_request() into scheduler code, then the lock
> > > inversion is avoided. So far only mq-deadline and bfq calls into
> > > blk_mq_sched_try_insert_merge(), and this change should be small.
> > 
> > We'd potentially be putting multiple requests if we keep the recursive merge.
> 
> Oh, we still can pass a list to hold all requests to be freed, then free
> them all outside in scheduler code.

If we cannot really get rid of the recursive merge (not yet convinced),
this is also an option I've considered. I wasn't sure what we could use in
struct request to attach the request to a list, but it seems the
.merged_requests handlers already remove the request from the queuelist,
so we should be fine using that.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge()
  2021-05-21 11:53     ` Jan Kara
@ 2021-05-21 13:12       ` Ming Lei
  2021-05-21 13:44         ` Jan Kara
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-05-21 13:12 UTC (permalink / raw)
  To: Jan Kara; +Cc: Jens Axboe, linux-block, Khazhy Kumykov, Paolo Valente

On Fri, May 21, 2021 at 01:53:54PM +0200, Jan Kara wrote:
> On Fri 21-05-21 08:42:16, Ming Lei wrote:
> > On Fri, May 21, 2021 at 12:33:52AM +0200, Jan Kara wrote:
> > > Most of the merging happens at bio level. There should not be much
> > > merging happening at request level anymore. Furthermore if we backmerged
> > > a request to the previous one, the chances to be able to merge the
> > > result to even previous request are slim - that could succeed only if
> > > requests were inserted in 2 1 3 order. Merging more requests in
> > 
> > Right, but some workload has this kind of pattern.
> > 
> > For example of qemu IO emulation, it often can be thought as single job,
> > native aio, direct io with high queue depth. IOs is originated from one VM, but
> > may be from multiple jobs in the VM, so bio merge may not hit much because of IO
> > emulation timing(virtio-scsi/blk's MQ, or IO can be interleaved from multiple
> > jobs via the SQ transport), but request merge can really make a difference, see
> > recent patch in the following link:
> > 
> > https://lore.kernel.org/linux-block/3f61e939-d95a-1dd1-6870-e66795cfc1b1@suse.de/T/#t
> 
> Oh, request merging definitely does make a difference. But the elevator
> hash & merge logic I'm modifying here is used only by BFQ and MQ-DEADLINE
> AFAICT. And these IO schedulers will already call blk_mq_sched_try_merge()
> from their .bio_merge handler which gets called from blk_mq_submit_bio().
> So all the merging that can happen in the code I remove should have already
> happened. Or am I missing something?

There are at least two reasons:

1) when .bio_merge() is called, some requests are still held in the plug
list, so the bio may not get merged into requests already in the scheduler
queue; when the plug list is flushed and these requests are inserted into
the scheduler queue, we have to try to merge them further

2) only blk_mq_sched_try_insert_merge() is capable of doing aggressive
request merging: when req A is merged into req B, the function continues
trying to merge req B with other in-queue requests until no further merge
can be done; neither blk_mq_sched_try_merge() nor blk_attempt_plug_merge()
can do such aggressive request merging.



Thanks,
Ming



* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-21 12:05         ` Jan Kara
@ 2021-05-21 13:36           ` Ming Lei
  2021-05-21 13:47             ` Jan Kara
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-05-21 13:36 UTC (permalink / raw)
  To: Jan Kara; +Cc: Khazhy Kumykov, Jens Axboe, linux-block, Paolo Valente

On Fri, May 21, 2021 at 02:05:51PM +0200, Jan Kara wrote:
> On Fri 21-05-21 14:54:09, Ming Lei wrote:
> > On Thu, May 20, 2021 at 08:29:49PM -0700, Khazhy Kumykov wrote:
> > > On Thu, May 20, 2021 at 5:57 PM Ming Lei <ming.lei@redhat.com> wrote:
> > > >
> > > > On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> > > > > Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> > > > >
> > > > > bfqd -> ioc:
> > > > >  put_io_context+0x33/0x90 -> ioc->lock grabbed
> > > > >  blk_mq_free_request+0x51/0x140
> > > > >  blk_put_request+0xe/0x10
> > > > >  blk_attempt_req_merge+0x1d/0x30
> > > > >  elv_attempt_insert_merge+0x56/0xa0
> > > > >  blk_mq_sched_try_insert_merge+0x4b/0x60
> > > > >  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
> > > >
> > > > We could move blk_put_request() into scheduler code, then the lock
> > > > inversion is avoided. So far only mq-deadline and bfq calls into
> > > > blk_mq_sched_try_insert_merge(), and this change should be small.
> > > 
> > > We'd potentially be putting multiple requests if we keep the recursive merge.
> > 
> > Oh, we still can pass a list to hold all requests to be freed, then free
> > them all outside in scheduler code.
> 
> If we cannot really get rid of the recursive merge (not yet convinced),
> this is also an option I've considered. I was afraid what can we use in
> struct request to attach request to a list but it seems .merged_requests
> handlers remove the request from the queuelist already so we should be fine
> using that.

The request has already been removed from the scheduler queue and is safe
to free, so it is also safe to hold it on a temporary list.

Thanks,
Ming



* Re: [PATCH 1/2] block: Do not merge recursively in elv_attempt_insert_merge()
  2021-05-21 13:12       ` Ming Lei
@ 2021-05-21 13:44         ` Jan Kara
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2021-05-21 13:44 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jan Kara, Jens Axboe, linux-block, Khazhy Kumykov, Paolo Valente

On Fri 21-05-21 21:12:14, Ming Lei wrote:
> On Fri, May 21, 2021 at 01:53:54PM +0200, Jan Kara wrote:
> > On Fri 21-05-21 08:42:16, Ming Lei wrote:
> > > On Fri, May 21, 2021 at 12:33:52AM +0200, Jan Kara wrote:
> > > > Most of the merging happens at bio level. There should not be much
> > > > merging happening at request level anymore. Furthermore if we backmerged
> > > > a request to the previous one, the chances to be able to merge the
> > > > result to even previous request are slim - that could succeed only if
> > > > requests were inserted in 2 1 3 order. Merging more requests in
> > > 
> > > Right, but some workload has this kind of pattern.
> > > 
> > > For example of qemu IO emulation, it often can be thought as single job,
> > > native aio, direct io with high queue depth. IOs is originated from one VM, but
> > > may be from multiple jobs in the VM, so bio merge may not hit much because of IO
> > > emulation timing(virtio-scsi/blk's MQ, or IO can be interleaved from multiple
> > > jobs via the SQ transport), but request merge can really make a difference, see
> > > recent patch in the following link:
> > > 
> > > https://lore.kernel.org/linux-block/3f61e939-d95a-1dd1-6870-e66795cfc1b1@suse.de/T/#t
> > 
> > Oh, request merging definitely does make a difference. But the elevator
> > hash & merge logic I'm modifying here is used only by BFQ and MQ-DEADLINE
> > AFAICT. And these IO schedulers will already call blk_mq_sched_try_merge()
> > from their \.bio_merge handler which gets called from blk_mq_submit_bio().
> > So all the merging that can happen in the code I remove should have already
> > happened. Or am I missing something?
> 
> There might be at least two reasons:
> 
> 1) when .bio_merge() is called, some requests are kept in plug list, so
> the bio may not be merged to requests in scheduler queue; when flushing plug
> list and inserts these requests to scheduler queue, we have to try to
> merge them further

Oh, right, I forgot that the plug list already stores requests, not bios.

> 2) only blk_mq_sched_try_insert_merge() is capable of doing aggressive
> request merge, such as, when req A is merged to req B, the function will
> continue to try to merge req B with other in-queue requests, until no
> any further merge can't be done; neither blk_mq_sched_try_merge() nor
> blk_attempt_plug_merge can do such aggressive request merge.

Yes, fair point. I was thinking only about a few requests, but if the
request sequence is like 0 2 4 6 ... 2n 1 3 5 7 ... 2n+1, then bio merging
will result in 'n' requests while request merging will be able to get it
down to 1 request.
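
As a concrete illustration (made-up unit-sized requests, n = 3, i.e. the
insertion order 0 2 4 6 1 3 5 7):

  insert 0 2 4 6: nothing is contiguous yet             -> [0] [2] [4] [6]
  insert 1: back-merges into [0]                        -> [0-1]
  insert 3: back-merges into [2], then the recursive
            step merges [2-3] into [0-1]                -> [0-3]
  insert 5: [4-5], then merged into [0-3]               -> [0-5]
  insert 7: [6-7], then merged into [0-5]               -> [0-7]

Without the recursion each insert stops after its first merge and we are
left with [0-1] [2-3] [4-5] [6-7].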

I'll keep the recursive merge and pass back a list of requests to free
instead. Thanks for the explanations!

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 2/2] blk: Fix lock inversion between ioc lock and bfqd lock
  2021-05-21 13:36           ` Ming Lei
@ 2021-05-21 13:47             ` Jan Kara
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2021-05-21 13:47 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jan Kara, Khazhy Kumykov, Jens Axboe, linux-block, Paolo Valente

On Fri 21-05-21 21:36:05, Ming Lei wrote:
> On Fri, May 21, 2021 at 02:05:51PM +0200, Jan Kara wrote:
> > On Fri 21-05-21 14:54:09, Ming Lei wrote:
> > > On Thu, May 20, 2021 at 08:29:49PM -0700, Khazhy Kumykov wrote:
> > > > On Thu, May 20, 2021 at 5:57 PM Ming Lei <ming.lei@redhat.com> wrote:
> > > > >
> > > > > On Fri, May 21, 2021 at 12:33:53AM +0200, Jan Kara wrote:
> > > > > > Lockdep complains about lock inversion between ioc->lock and bfqd->lock:
> > > > > >
> > > > > > bfqd -> ioc:
> > > > > >  put_io_context+0x33/0x90 -> ioc->lock grabbed
> > > > > >  blk_mq_free_request+0x51/0x140
> > > > > >  blk_put_request+0xe/0x10
> > > > > >  blk_attempt_req_merge+0x1d/0x30
> > > > > >  elv_attempt_insert_merge+0x56/0xa0
> > > > > >  blk_mq_sched_try_insert_merge+0x4b/0x60
> > > > > >  bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
> > > > >
> > > > > We could move blk_put_request() into scheduler code, then the lock
> > > > > inversion is avoided. So far only mq-deadline and bfq calls into
> > > > > blk_mq_sched_try_insert_merge(), and this change should be small.
> > > > 
> > > > We'd potentially be putting multiple requests if we keep the recursive merge.
> > > 
> > > Oh, we still can pass a list to hold all requests to be freed, then free
> > > them all outside in scheduler code.
> > 
> > If we cannot really get rid of the recursive merge (not yet convinced),
> > this is also an option I've considered. I was afraid what can we use in
> > struct request to attach request to a list but it seems .merged_requests
> > handlers remove the request from the queuelist already so we should be fine
> > using that.
> 
> The request has been removed from scheduler queue, and safe to free,
> so it is safe to be held in one temporary list.

Not quite, there's still the ->finish_request hook that will be called on
the request from blk_mq_free_request(), and e.g. BFQ performs quite a lot
of cleanup there. But yes, at least queuelist seems to be available for
reuse here.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

