* [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization
@ 2018-06-25 11:31 Ming Lei
  2018-06-25 11:31 ` [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag() Ming Lei
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Omar Sandoval, Andrew Jones,
	Bart Van Assche, linux-scsi, Martin K. Petersen,
	Christoph Hellwig

Hi,

The first two patches clean up blk_mq_get_driver_tag() and
blk_mq_mark_tag_wait().

The 3rd patch fixes a race between adding hctx->dispatch_wait to a
wait queue and removing it from that wait queue.

The 4th patch avoids iterating over all queues that share the same tags
after completing a request, so that we can kill the synchronize_rcu()
in the queue cleanup path; this avoids the long delay during SCSI LUN
probe and improves IO performance as well.

The 5th patch avoids synchronizing RCU in blk_cleanup_queue() when the
queue isn't initialized, which also avoids the long delay during SCSI
LUN probe.

Ming Lei (5):
  blk-mq: cleanup blk_mq_get_driver_tag()
  blk-mq: don't pass **hctx to blk_mq_mark_tag_wait()
  blk-mq: introduce new lock for protecting hctx->dispatch_wait
  blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()
  blk-mq: avoid to synchronize rcu inside blk_cleanup_queue()

 block/blk-core.c       |  8 +++--
 block/blk-mq-sched.c   | 85 +++-----------------------------------------------
 block/blk-mq.c         | 68 +++++++++++++++++++---------------------
 block/blk-mq.h         |  3 +-
 include/linux/blk-mq.h |  1 +
 include/linux/blkdev.h |  2 --
 6 files changed, 45 insertions(+), 122 deletions(-)

Cc: Omar Sandoval <osandov@fb.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: linux-scsi@vger.kernel.org
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>


-- 
2.9.5

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag()
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
@ 2018-06-25 11:31 ` Ming Lei
  2018-06-26 21:11   ` Omar Sandoval
  2018-06-25 11:31 ` [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait() Ming Lei
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Omar Sandoval, Andrew Jones,
	Bart Van Assche, Christoph Hellwig

We never pass 'wait' as true to blk_mq_get_driver_tag(), and we never
use the hctx it passes back out.

So clean up the callers and remove the two extra parameters.

Cc: Omar Sandoval <osandov@fb.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 19 +++++++------------
 block/blk-mq.h |  3 +--
 2 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b429d515b568..62e153eb720c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -964,17 +964,14 @@ static inline unsigned int queued_to_index(unsigned int queued)
 	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
 }
 
-bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
-			   bool wait)
+bool blk_mq_get_driver_tag(struct request *rq)
 {
 	struct blk_mq_alloc_data data = {
 		.q = rq->q,
 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
-		.flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
+		.flags = BLK_MQ_REQ_NOWAIT,
 	};
 
-	might_sleep_if(wait);
-
 	if (rq->tag != -1)
 		goto done;
 
@@ -991,8 +988,6 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
 	}
 
 done:
-	if (hctx)
-		*hctx = data.hctx;
 	return rq->tag != -1;
 }
 
@@ -1034,7 +1029,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 		 * Don't clear RESTART here, someone else could have set it.
 		 * At most this will cost an extra queue run.
 		 */
-		return blk_mq_get_driver_tag(rq, hctx, false);
+		return blk_mq_get_driver_tag(rq);
 	}
 
 	wait = &this_hctx->dispatch_wait;
@@ -1055,7 +1050,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 	 * allocation failure and adding the hardware queue to the wait
 	 * queue.
 	 */
-	ret = blk_mq_get_driver_tag(rq, hctx, false);
+	ret = blk_mq_get_driver_tag(rq);
 	if (!ret) {
 		spin_unlock(&this_hctx->lock);
 		return false;
@@ -1102,7 +1097,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
 			break;
 
-		if (!blk_mq_get_driver_tag(rq, NULL, false)) {
+		if (!blk_mq_get_driver_tag(rq)) {
 			/*
 			 * The initial allocation attempt failed, so we need to
 			 * rerun the hardware queue when a tag is freed. The
@@ -1134,7 +1129,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 			bd.last = true;
 		else {
 			nxt = list_first_entry(list, struct request, queuelist);
-			bd.last = !blk_mq_get_driver_tag(nxt, NULL, false);
+			bd.last = !blk_mq_get_driver_tag(nxt);
 		}
 
 		ret = q->mq_ops->queue_rq(hctx, &bd);
@@ -1688,7 +1683,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	if (!blk_mq_get_dispatch_budget(hctx))
 		goto insert;
 
-	if (!blk_mq_get_driver_tag(rq, NULL, false)) {
+	if (!blk_mq_get_driver_tag(rq)) {
 		blk_mq_put_dispatch_budget(hctx);
 		goto insert;
 	}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 89231e439b2f..23659f41bf2c 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -36,8 +36,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
 void blk_mq_wake_waiters(struct request_queue *q);
 bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
 void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
-bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
-				bool wait);
+bool blk_mq_get_driver_tag(struct request *rq);
 struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
 					struct blk_mq_ctx *start);
 
-- 
2.9.5


* [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait()
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
  2018-06-25 11:31 ` [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag() Ming Lei
@ 2018-06-25 11:31 ` Ming Lei
  2018-06-26 21:12   ` Omar Sandoval
  2018-06-25 11:31 ` [PATCH 3/5] blk-mq: introduce new lock for protecting hctx->dispatch_wait Ming Lei
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Andrew Jones, Christoph Hellwig,
	Omar Sandoval, Bart Van Assche

'hctx' is never changed, so there is no need to pass '**hctx' to
blk_mq_mark_tag_wait().

Cc: Andrew Jones <drjones@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 62e153eb720c..db2814eb050f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1009,17 +1009,16 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
  * restart. For both cases, take care to check the condition again after
  * marking us as waiting.
  */
-static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
+static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 				 struct request *rq)
 {
-	struct blk_mq_hw_ctx *this_hctx = *hctx;
 	struct sbq_wait_state *ws;
 	wait_queue_entry_t *wait;
 	bool ret;
 
-	if (!(this_hctx->flags & BLK_MQ_F_TAG_SHARED)) {
-		if (!test_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state))
-			set_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state);
+	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED)) {
+		if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+			set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
 
 		/*
 		 * It's possible that a tag was freed in the window between the
@@ -1032,17 +1031,17 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 		return blk_mq_get_driver_tag(rq);
 	}
 
-	wait = &this_hctx->dispatch_wait;
+	wait = &hctx->dispatch_wait;
 	if (!list_empty_careful(&wait->entry))
 		return false;
 
-	spin_lock(&this_hctx->lock);
+	spin_lock(&hctx->lock);
 	if (!list_empty(&wait->entry)) {
-		spin_unlock(&this_hctx->lock);
+		spin_unlock(&hctx->lock);
 		return false;
 	}
 
-	ws = bt_wait_ptr(&this_hctx->tags->bitmap_tags, this_hctx);
+	ws = bt_wait_ptr(&hctx->tags->bitmap_tags, hctx);
 	add_wait_queue(&ws->wait, wait);
 
 	/*
@@ -1052,7 +1051,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 	 */
 	ret = blk_mq_get_driver_tag(rq);
 	if (!ret) {
-		spin_unlock(&this_hctx->lock);
+		spin_unlock(&hctx->lock);
 		return false;
 	}
 
@@ -1063,7 +1062,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 	spin_lock_irq(&ws->wait.lock);
 	list_del_init(&wait->entry);
 	spin_unlock_irq(&ws->wait.lock);
-	spin_unlock(&this_hctx->lock);
+	spin_unlock(&hctx->lock);
 
 	return true;
 }
@@ -1105,7 +1104,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 			 * before we add this entry back on the dispatch list,
 			 * we'll re-run it below.
 			 */
-			if (!blk_mq_mark_tag_wait(&hctx, rq)) {
+			if (!blk_mq_mark_tag_wait(hctx, rq)) {
 				blk_mq_put_dispatch_budget(hctx);
 				/*
 				 * For non-shared tags, the RESTART check
-- 
2.9.5


* [PATCH 3/5] blk-mq: introduce new lock for protecting hctx->dispatch_wait
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
  2018-06-25 11:31 ` [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag() Ming Lei
  2018-06-25 11:31 ` [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait() Ming Lei
@ 2018-06-25 11:31 ` Ming Lei
  2018-06-25 11:31 ` [PATCH 4/5] blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set() Ming Lei
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Andrew Jones, Christoph Hellwig,
	Omar Sandoval, Bart Van Assche

Currently hctx->lock is acquired when adding hctx->dispatch_wait to a
wait queue, but it isn't held when removing the entry from that wait
queue.

An IO hang can be observed easily if SCHED RESTART is disabled; in
other words, RESTART currently exists just to work around this race in
blk_mq_mark_tag_wait().

This patch fixes the race by introducing hctx->dispatch_wait_lock and
holding it when removing hctx->dispatch_wait in blk_mq_dispatch_wake().
A separate lock is needed because hctx->lock must not be acquired in
irq context.
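
The resulting lock ordering can be sketched in plain userspace C
(pthread mutexes stand in for the spinlocks and a boolean stands in
for wait-queue membership; the helper names mirror the kernel
functions but are illustrative only, not the actual implementation):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the two spinlocks: the wait queue head lock (taken
 * with irqs disabled in the kernel) and the new per-hctx lock. */
static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dispatch_wait_lock = PTHREAD_MUTEX_INITIALIZER;
static bool on_wait_queue;

/* Mirrors blk_mq_mark_tag_wait(): wq->lock is the outer lock,
 * dispatch_wait_lock the inner one, so both adding and removing the
 * entry are serialized by the inner lock. */
static bool mark_tag_wait(bool got_tag)
{
	pthread_mutex_lock(&wq_lock);		/* spin_lock_irq(&wq->lock) */
	pthread_mutex_lock(&dispatch_wait_lock);
	on_wait_queue = true;			/* __add_wait_queue() */
	if (got_tag)
		on_wait_queue = false;		/* got a tag: remove ourselves */
	pthread_mutex_unlock(&dispatch_wait_lock);
	pthread_mutex_unlock(&wq_lock);
	return got_tag;
}

/* Mirrors blk_mq_dispatch_wake(): the waker already holds wq->lock,
 * so only dispatch_wait_lock is taken around list_del_init(). */
static void dispatch_wake(void)
{
	pthread_mutex_lock(&dispatch_wait_lock);
	on_wait_queue = false;			/* list_del_init(&wait->entry) */
	pthread_mutex_unlock(&dispatch_wait_lock);
}
```

Since remove now takes dispatch_wait_lock too, a wakeup can no longer
race with blk_mq_mark_tag_wait() re-adding the entry.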

Fixes: eb619fdb2d4cb8b3d3419 ("blk-mq: fix issue with shared tag queue re-running")
Cc: Andrew Jones <drjones@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c         | 26 +++++++++++++++++---------
 include/linux/blk-mq.h |  1 +
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index db2814eb050f..ff3ff191ff0b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -998,7 +998,10 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
 
 	hctx = container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);
 
+	spin_lock(&hctx->dispatch_wait_lock);
 	list_del_init(&wait->entry);
+	spin_unlock(&hctx->dispatch_wait_lock);
+
 	blk_mq_run_hw_queue(hctx, true);
 	return 1;
 }
@@ -1012,7 +1015,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
 static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 				 struct request *rq)
 {
-	struct sbq_wait_state *ws;
+	struct wait_queue_head *wq;
 	wait_queue_entry_t *wait;
 	bool ret;
 
@@ -1035,14 +1038,18 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 	if (!list_empty_careful(&wait->entry))
 		return false;
 
-	spin_lock(&hctx->lock);
+	wq = &bt_wait_ptr(&hctx->tags->bitmap_tags, hctx)->wait;
+
+	spin_lock_irq(&wq->lock);
+	spin_lock(&hctx->dispatch_wait_lock);
 	if (!list_empty(&wait->entry)) {
-		spin_unlock(&hctx->lock);
+		spin_unlock(&hctx->dispatch_wait_lock);
+		spin_unlock_irq(&wq->lock);
 		return false;
 	}
 
-	ws = bt_wait_ptr(&hctx->tags->bitmap_tags, hctx);
-	add_wait_queue(&ws->wait, wait);
+	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+	__add_wait_queue(wq, wait);
 
 	/*
 	 * It's possible that a tag was freed in the window between the
@@ -1051,7 +1058,8 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 	 */
 	ret = blk_mq_get_driver_tag(rq);
 	if (!ret) {
-		spin_unlock(&hctx->lock);
+		spin_unlock(&hctx->dispatch_wait_lock);
+		spin_unlock_irq(&wq->lock);
 		return false;
 	}
 
@@ -1059,10 +1067,9 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 	 * We got a tag, remove ourselves from the wait queue to ensure
 	 * someone else gets the wakeup.
 	 */
-	spin_lock_irq(&ws->wait.lock);
 	list_del_init(&wait->entry);
-	spin_unlock_irq(&ws->wait.lock);
-	spin_unlock(&hctx->lock);
+	spin_unlock(&hctx->dispatch_wait_lock);
+	spin_unlock_irq(&wq->lock);
 
 	return true;
 }
@@ -2130,6 +2137,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 
 	hctx->nr_ctx = 0;
 
+	spin_lock_init(&hctx->dispatch_wait_lock);
 	init_waitqueue_func_entry(&hctx->dispatch_wait, blk_mq_dispatch_wake);
 	INIT_LIST_HEAD(&hctx->dispatch_wait.entry);
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e3147eb74222..ea690254dab7 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -39,6 +39,7 @@ struct blk_mq_hw_ctx {
 	struct blk_mq_ctx	**ctxs;
 	unsigned int		nr_ctx;
 
+	spinlock_t		dispatch_wait_lock;
 	wait_queue_entry_t	dispatch_wait;
 	atomic_t		wait_index;
 
-- 
2.9.5


* [PATCH 4/5] blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
                   ` (2 preceding siblings ...)
  2018-06-25 11:31 ` [PATCH 3/5] blk-mq: introduce new lock for protecting hctx->dispatch_wait Ming Lei
@ 2018-06-25 11:31 ` Ming Lei
  2018-06-25 11:31 ` [PATCH 5/5] blk-mq: avoid to synchronize rcu inside blk_cleanup_queue() Ming Lei
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Omar Sandoval, Bart Van Assche,
	Christoph Hellwig, Martin K . Petersen, linux-scsi, Andrew Jones

We have to remove synchronize_rcu() from blk_cleanup_queue(),
otherwise it can cause long delays during LUN probe. To remove it, we
have to avoid iterating set->tag_list in the IO path, e.g. in
blk_mq_sched_restart().

This patch reverts commit 5b79413946d (Revert "blk-mq: don't handle
TAG_SHARED in restart"). We have fixed enough IO hang issues by now,
and there is no longer any reason to restart all queues sharing one
tag set, for the following reasons:

1) the blk-mq core can deal with the shared-tags case well via
blk_mq_get_driver_tag(), which can wake up queues waiting for a driver
tag.

2) SCSI is a bit special because it may return BLK_STS_RESOURCE when
the queue, target or host isn't ready, but SCSI's built-in restart
covers all of these cases well; see scsi_end_request(): the queue is
rerun after any request initiated from this host/target completes.

In my test on scsi_debug (8 LUNs), this patch may improve IOPS by
20%~30% when running I/O on these 8 LUNs concurrently.

Fixes: 705cda97ee3a ("blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list")
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org
Reported-by: Andrew Jones <drjones@redhat.com>
Cc: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-sched.c   | 85 +++-----------------------------------------------
 block/blk-mq.c         | 10 ++----
 include/linux/blkdev.h |  2 --
 3 files changed, 7 insertions(+), 90 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 56c493c6cd90..4e027f6108ae 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -59,29 +59,16 @@ static void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
 	if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
 		return;
 
-	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
-		struct request_queue *q = hctx->queue;
-
-		if (!test_and_set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-			atomic_inc(&q->shared_hctx_restart);
-	} else
-		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+	set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
 }
 
-static bool blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
+void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
 {
 	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-		return false;
-
-	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
-		struct request_queue *q = hctx->queue;
-
-		if (test_and_clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-			atomic_dec(&q->shared_hctx_restart);
-	} else
-		clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+		return;
+	clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
 
-	return blk_mq_run_hw_queue(hctx, true);
+	blk_mq_run_hw_queue(hctx, true);
 }
 
 /*
@@ -380,68 +367,6 @@ static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
 	return false;
 }
 
-/**
- * list_for_each_entry_rcu_rr - iterate in a round-robin fashion over rcu list
- * @pos:    loop cursor.
- * @skip:   the list element that will not be examined. Iteration starts at
- *          @skip->next.
- * @head:   head of the list to examine. This list must have at least one
- *          element, namely @skip.
- * @member: name of the list_head structure within typeof(*pos).
- */
-#define list_for_each_entry_rcu_rr(pos, skip, head, member)		\
-	for ((pos) = (skip);						\
-	     (pos = (pos)->member.next != (head) ? list_entry_rcu(	\
-			(pos)->member.next, typeof(*pos), member) :	\
-	      list_entry_rcu((pos)->member.next->next, typeof(*pos), member)), \
-	     (pos) != (skip); )
-
-/*
- * Called after a driver tag has been freed to check whether a hctx needs to
- * be restarted. Restarts @hctx if its tag set is not shared. Restarts hardware
- * queues in a round-robin fashion if the tag set of @hctx is shared with other
- * hardware queues.
- */
-void blk_mq_sched_restart(struct blk_mq_hw_ctx *const hctx)
-{
-	struct blk_mq_tags *const tags = hctx->tags;
-	struct blk_mq_tag_set *const set = hctx->queue->tag_set;
-	struct request_queue *const queue = hctx->queue, *q;
-	struct blk_mq_hw_ctx *hctx2;
-	unsigned int i, j;
-
-	if (set->flags & BLK_MQ_F_TAG_SHARED) {
-		/*
-		 * If this is 0, then we know that no hardware queues
-		 * have RESTART marked. We're done.
-		 */
-		if (!atomic_read(&queue->shared_hctx_restart))
-			return;
-
-		rcu_read_lock();
-		list_for_each_entry_rcu_rr(q, queue, &set->tag_list,
-					   tag_set_list) {
-			queue_for_each_hw_ctx(q, hctx2, i)
-				if (hctx2->tags == tags &&
-				    blk_mq_sched_restart_hctx(hctx2))
-					goto done;
-		}
-		j = hctx->queue_num + 1;
-		for (i = 0; i < queue->nr_hw_queues; i++, j++) {
-			if (j == queue->nr_hw_queues)
-				j = 0;
-			hctx2 = queue->queue_hw_ctx[j];
-			if (hctx2->tags == tags &&
-			    blk_mq_sched_restart_hctx(hctx2))
-				break;
-		}
-done:
-		rcu_read_unlock();
-	} else {
-		blk_mq_sched_restart_hctx(hctx);
-	}
-}
-
 void blk_mq_sched_insert_request(struct request *rq, bool at_head,
 				 bool run_queue, bool async)
 {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ff3ff191ff0b..c8c6c0373bee 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2323,15 +2323,10 @@ static void queue_set_hctx_shared(struct request_queue *q, bool shared)
 	int i;
 
 	queue_for_each_hw_ctx(q, hctx, i) {
-		if (shared) {
-			if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-				atomic_inc(&q->shared_hctx_restart);
+		if (shared)
 			hctx->flags |= BLK_MQ_F_TAG_SHARED;
-		} else {
-			if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-				atomic_dec(&q->shared_hctx_restart);
+		else
 			hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
-		}
 	}
 }
 
@@ -2362,7 +2357,6 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
 		blk_mq_update_tag_set_depth(set, false);
 	}
 	mutex_unlock(&set->tag_list_lock);
-	synchronize_rcu();
 	INIT_LIST_HEAD(&q->tag_set_list);
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 9154570edf29..ca40c7419edd 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -442,8 +442,6 @@ struct request_queue {
 	int			nr_rqs[2];	/* # allocated [a]sync rqs */
 	int			nr_rqs_elvpriv;	/* # allocated rqs w/ elvpriv */
 
-	atomic_t		shared_hctx_restart;
-
 	struct blk_queue_stats	*stats;
 	struct rq_wb		*rq_wb;
 
-- 
2.9.5


* [PATCH 5/5] blk-mq: avoid to synchronize rcu inside blk_cleanup_queue()
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
                   ` (3 preceding siblings ...)
  2018-06-25 11:31 ` [PATCH 4/5] blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set() Ming Lei
@ 2018-06-25 11:31 ` Ming Lei
  2018-06-25 15:23 ` [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Andrew Jones
  2018-06-28 19:21 ` Jens Axboe
  6 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2018-06-25 11:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Omar Sandoval, Andrew Jones,
	Bart Van Assche, linux-scsi, Martin K. Petersen,
	Christoph Hellwig

SCSI probing may synchronously create and destroy a lot of
request_queues for non-existent devices. Any synchronize_rcu() in the
queue creation or destroy path may introduce long latency during
booting; see the detailed description in the comment of
blk_register_queue().

This patch removes one synchronize_rcu() inside blk_cleanup_queue()
for this case. Commit c2856ae2f315d75 ("blk-mq: quiesce queue before
freeing queue") needs synchronize_rcu() to implement
blk_mq_quiesce_queue(), but when the queue isn't initialized yet it
isn't necessary, since only pass-through requests are involved then
and the original issue in scsi_execute() can't happen at all.

Without this patch and the previous one, it may take 20+ seconds for
virtio-scsi to complete disk probe. With the two patches, the time
drops to less than 100ms.

Fixes: c2856ae2f315d75 ("blk-mq: quiesce queue before freeing queue")
Reported-by: Andrew Jones <drjones@redhat.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: linux-scsi@vger.kernel.org
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index afd2596ea3d3..222d4fc0e524 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -762,9 +762,13 @@ void blk_cleanup_queue(struct request_queue *q)
 	 * make sure all in-progress dispatch are completed because
 	 * blk_freeze_queue() can only complete all requests, and
 	 * dispatch may still be in-progress since we dispatch requests
-	 * from more than one contexts
+	 * from more than one contexts.
+	 *
+	 * No need to quiesce queue if it isn't initialized yet since
+	 * blk_freeze_queue() should be enough for cases of passthrough
+	 * request.
 	 */
-	if (q->mq_ops)
+	if (q->mq_ops && blk_queue_init_done(q))
 		blk_mq_quiesce_queue(q);
 
 	/* for synchronous bio-based driver finish in-flight integrity i/o */
-- 
2.9.5


* Re: [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
                   ` (4 preceding siblings ...)
  2018-06-25 11:31 ` [PATCH 5/5] blk-mq: avoid to synchronize rcu inside blk_cleanup_queue() Ming Lei
@ 2018-06-25 15:23 ` Andrew Jones
  2018-06-28 19:21 ` Jens Axboe
  6 siblings, 0 replies; 10+ messages in thread
From: Andrew Jones @ 2018-06-25 15:23 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Omar Sandoval, Bart Van Assche,
	linux-scsi, Martin K. Petersen, Christoph Hellwig

On Mon, Jun 25, 2018 at 07:31:44PM +0800, Ming Lei wrote:
> Hi,
> 
> The first two patches clean up blk_mq_get_driver_tag() and
> blk_mq_mark_tag_wait().
> 
> The 3rd patch fixes a race between adding hctx->dispatch_wait to a
> wait queue and removing it from that wait queue.
> 
> The 4th patch avoids iterating over all queues that share the same tags
> after completing a request, so that we can kill the synchronize_rcu()
> in the queue cleanup path; this avoids the long delay during SCSI LUN
> probe and improves IO performance as well.
> 
> The 5th patch avoids synchronizing RCU in blk_cleanup_queue() when the
> queue isn't initialized, which also avoids the long delay during SCSI
> LUN probe.
> 
> Ming Lei (5):
>   blk-mq: cleanup blk_mq_get_driver_tag()
>   blk-mq: don't pass **hctx to blk_mq_mark_tag_wait()
>   blk-mq: introduce new lock for protecting hctx->dispatch_wait
>   blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()
>   blk-mq: avoid to synchronize rcu inside blk_cleanup_queue()
> 
>  block/blk-core.c       |  8 +++--
>  block/blk-mq-sched.c   | 85 +++-----------------------------------------------
>  block/blk-mq.c         | 68 +++++++++++++++++++---------------------
>  block/blk-mq.h         |  3 +-
>  include/linux/blk-mq.h |  1 +
>  include/linux/blkdev.h |  2 --
>  6 files changed, 45 insertions(+), 122 deletions(-)
> 
> Cc: Omar Sandoval <osandov@fb.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Bart Van Assche <bart.vanassche@wdc.com>
> Cc: linux-scsi@vger.kernel.org
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> 
>

I gave the series a test run and it worked for me. So

Tested-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew 


* Re: [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag()
  2018-06-25 11:31 ` [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag() Ming Lei
@ 2018-06-26 21:11   ` Omar Sandoval
  0 siblings, 0 replies; 10+ messages in thread
From: Omar Sandoval @ 2018-06-26 21:11 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Omar Sandoval, Andrew Jones,
	Bart Van Assche, Christoph Hellwig

On Mon, Jun 25, 2018 at 07:31:45PM +0800, Ming Lei wrote:
> We never pass 'wait' as true to blk_mq_get_driver_tag(), and we never
> use the hctx it passes back out.
> 
> So clean up the callers and remove the two extra parameters.

Might be worth mentioning that the last use went away in 0c2a6fe4dc3e
("blk-mq: don't special case flush inserts for blk-mq-sched").

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Cc: Omar Sandoval <osandov@fb.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Bart Van Assche <bart.vanassche@wdc.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 19 +++++++------------
>  block/blk-mq.h |  3 +--
>  2 files changed, 8 insertions(+), 14 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b429d515b568..62e153eb720c 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -964,17 +964,14 @@ static inline unsigned int queued_to_index(unsigned int queued)
>  	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
>  }
>  
> -bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
> -			   bool wait)
> +bool blk_mq_get_driver_tag(struct request *rq)
>  {
>  	struct blk_mq_alloc_data data = {
>  		.q = rq->q,
>  		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
> -		.flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
> +		.flags = BLK_MQ_REQ_NOWAIT,
>  	};
>  
> -	might_sleep_if(wait);
> -
>  	if (rq->tag != -1)
>  		goto done;
>  
> @@ -991,8 +988,6 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
>  	}
>  
>  done:
> -	if (hctx)
> -		*hctx = data.hctx;
>  	return rq->tag != -1;
>  }
>  
> @@ -1034,7 +1029,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
>  		 * Don't clear RESTART here, someone else could have set it.
>  		 * At most this will cost an extra queue run.
>  		 */
> -		return blk_mq_get_driver_tag(rq, hctx, false);
> +		return blk_mq_get_driver_tag(rq);
>  	}
>  
>  	wait = &this_hctx->dispatch_wait;
> @@ -1055,7 +1050,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
>  	 * allocation failure and adding the hardware queue to the wait
>  	 * queue.
>  	 */
> -	ret = blk_mq_get_driver_tag(rq, hctx, false);
> +	ret = blk_mq_get_driver_tag(rq);
>  	if (!ret) {
>  		spin_unlock(&this_hctx->lock);
>  		return false;
> @@ -1102,7 +1097,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>  		if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
>  			break;
>  
> -		if (!blk_mq_get_driver_tag(rq, NULL, false)) {
> +		if (!blk_mq_get_driver_tag(rq)) {
>  			/*
>  			 * The initial allocation attempt failed, so we need to
>  			 * rerun the hardware queue when a tag is freed. The
> @@ -1134,7 +1129,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>  			bd.last = true;
>  		else {
>  			nxt = list_first_entry(list, struct request, queuelist);
> -			bd.last = !blk_mq_get_driver_tag(nxt, NULL, false);
> +			bd.last = !blk_mq_get_driver_tag(nxt);
>  		}
>  
>  		ret = q->mq_ops->queue_rq(hctx, &bd);
> @@ -1688,7 +1683,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  	if (!blk_mq_get_dispatch_budget(hctx))
>  		goto insert;
>  
> -	if (!blk_mq_get_driver_tag(rq, NULL, false)) {
> +	if (!blk_mq_get_driver_tag(rq)) {
>  		blk_mq_put_dispatch_budget(hctx);
>  		goto insert;
>  	}
> diff --git a/block/blk-mq.h b/block/blk-mq.h
> index 89231e439b2f..23659f41bf2c 100644
> --- a/block/blk-mq.h
> +++ b/block/blk-mq.h
> @@ -36,8 +36,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
>  void blk_mq_wake_waiters(struct request_queue *q);
>  bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
>  void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
> -bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
> -				bool wait);
> +bool blk_mq_get_driver_tag(struct request *rq);
>  struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
>  					struct blk_mq_ctx *start);
>  
> -- 
> 2.9.5
> 


* Re: [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait()
  2018-06-25 11:31 ` [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait() Ming Lei
@ 2018-06-26 21:12   ` Omar Sandoval
  0 siblings, 0 replies; 10+ messages in thread
From: Omar Sandoval @ 2018-06-26 21:12 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Andrew Jones, Christoph Hellwig,
	Omar Sandoval, Bart Van Assche

On Mon, Jun 25, 2018 at 07:31:46PM +0800, Ming Lei wrote:
> 'hctx' is never changed, so there is no need to pass '**hctx' to
> blk_mq_mark_tag_wait().
> 
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Omar Sandoval <osandov@fb.com>
> Cc: Bart Van Assche <bart.vanassche@wdc.com>

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 62e153eb720c..db2814eb050f 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1009,17 +1009,16 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
>   * restart. For both cases, take care to check the condition again after
>   * marking us as waiting.
>   */
> -static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
> +static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
>  				 struct request *rq)
>  {
> -	struct blk_mq_hw_ctx *this_hctx = *hctx;
>  	struct sbq_wait_state *ws;
>  	wait_queue_entry_t *wait;
>  	bool ret;
>  
> -	if (!(this_hctx->flags & BLK_MQ_F_TAG_SHARED)) {
> -		if (!test_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state))
> -			set_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state);
> +	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED)) {
> +		if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
> +			set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
>  
>  		/*
>  		 * It's possible that a tag was freed in the window between the
> @@ -1032,17 +1031,17 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
>  		return blk_mq_get_driver_tag(rq);
>  	}
>  
> -	wait = &this_hctx->dispatch_wait;
> +	wait = &hctx->dispatch_wait;
>  	if (!list_empty_careful(&wait->entry))
>  		return false;
>  
> -	spin_lock(&this_hctx->lock);
> +	spin_lock(&hctx->lock);
>  	if (!list_empty(&wait->entry)) {
> -		spin_unlock(&this_hctx->lock);
> +		spin_unlock(&hctx->lock);
>  		return false;
>  	}
>  
> -	ws = bt_wait_ptr(&this_hctx->tags->bitmap_tags, this_hctx);
> +	ws = bt_wait_ptr(&hctx->tags->bitmap_tags, hctx);
>  	add_wait_queue(&ws->wait, wait);
>  
>  	/*
> @@ -1052,7 +1051,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
>  	 */
>  	ret = blk_mq_get_driver_tag(rq);
>  	if (!ret) {
> -		spin_unlock(&this_hctx->lock);
> +		spin_unlock(&hctx->lock);
>  		return false;
>  	}
>  
> @@ -1063,7 +1062,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
>  	spin_lock_irq(&ws->wait.lock);
>  	list_del_init(&wait->entry);
>  	spin_unlock_irq(&ws->wait.lock);
> -	spin_unlock(&this_hctx->lock);
> +	spin_unlock(&hctx->lock);
>  
>  	return true;
>  }
> @@ -1105,7 +1104,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>  			 * before we add this entry back on the dispatch list,
>  			 * we'll re-run it below.
>  			 */
> -			if (!blk_mq_mark_tag_wait(&hctx, rq)) {
> +			if (!blk_mq_mark_tag_wait(hctx, rq)) {
>  				blk_mq_put_dispatch_budget(hctx);
>  				/*
>  				 * For non-shared tags, the RESTART check
> -- 
> 2.9.5
> 


* Re: [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization
  2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
                   ` (5 preceding siblings ...)
  2018-06-25 15:23 ` [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Andrew Jones
@ 2018-06-28 19:21 ` Jens Axboe
  6 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2018-06-28 19:21 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-block, Omar Sandoval, Andrew Jones, Bart Van Assche,
	linux-scsi, Martin K. Petersen, Christoph Hellwig

On 6/25/18 5:31 AM, Ming Lei wrote:
> Hi,
> 
> The 1st two patches clean up blk_mq_get_driver_tag() and
> blk_mq_mark_tag_wait().
> 
> The 3rd patch fixes a race between adding hctx->dispatch_wait to
> the wait queue and removing it from the wait queue.
> 
> The 4th patch avoids iterating over all queues that share the same
> tags after completing one request, so that we can kill the
> synchronize_rcu() in the queue-cleanup patch; this avoids a long
> delay during SCSI LUN probe. Meanwhile, IO performance can be
> improved.
> 
> The 5th patch avoids synchronizing RCU in blk_cleanup_queue() when
> the queue isn't initialized, again avoiding a long delay during
> SCSI LUN probe.

Looks good to me, applied for 4.19.

-- 
Jens Axboe


end of thread, other threads:[~2018-06-28 19:21 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-25 11:31 [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Ming Lei
2018-06-25 11:31 ` [PATCH 1/5] blk-mq: cleanup blk_mq_get_driver_tag() Ming Lei
2018-06-26 21:11   ` Omar Sandoval
2018-06-25 11:31 ` [PATCH 2/5] blk-mq: don't pass **hctx to blk_mq_mark_tag_wait() Ming Lei
2018-06-26 21:12   ` Omar Sandoval
2018-06-25 11:31 ` [PATCH 3/5] blk-mq: introduce new lock for protecting hctx->dispatch_wait Ming Lei
2018-06-25 11:31 ` [PATCH 4/5] blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set() Ming Lei
2018-06-25 11:31 ` [PATCH 5/5] blk-mq: avoid to synchronize rcu inside blk_cleanup_queue() Ming Lei
2018-06-25 15:23 ` [PATCH 0/5] blk-mq: dispatch related cleanup, fix and optimization Andrew Jones
2018-06-28 19:21 ` Jens Axboe
