linux-kernel.vger.kernel.org archive mirror
* [PATCH V10 0/4] blk-mq: refactor code of issue directly
@ 2018-12-06  3:32 Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 1/4] blk-mq: insert to hctx dispatch list when bypass_insert is true Jianchao Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Jianchao Wang @ 2018-12-06  3:32 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Hi Jens

Please consider this patchset for 4.21.

It refactors the direct-issue code to unify the interface and make
the code clearer and more readable.

This patch set is rebased on the recent for-4.21/block and adds a 1st
patch which inserts non-read-write requests into the hctx dispatch
list, avoiding the merge and io scheduler paths when bypass_insert
is true. Without it, the insert is skipped, BLK_STS_RESOURCE is
returned, and the caller fails forever.

The 2nd patch refactors the direct-issue code into a single helper
interface that can handle all the cases.

The 3rd patch makes blk_mq_sched_insert_requests issue requests
directly with 'bypass' false, so it no longer needs to handle
non-issued requests itself.

The 4th patch replaces the last user of blk_mq_request_issue_directly
and kills it.

V10:
 - address Jens' comments.
 - let blk_mq_try_issue_directly return actual result for case
   'bypass == false'. (2/4)
 - use return value of blk_mq_try_issue_directly to identify
   whether the request is direct-issued successfully. (3/4)

V9:
 - rebase on recent for-4.21/block
 - add 1st patch

V8:
 - drop the original 2nd patch, which tried to insert requests into
   hctx->dispatch if quiesced or stopped.
 - remove two wrong 'unlikely'

V7:
 - drop the original 3rd patch, which tried to ensure the hctx is run on
   its mapped cpu in the direct-issue path.

V6:
 - drop the original 1st patch to address Jens' comment
 - discard the enum mq_issue_decision and blk_mq_make_decision and use
   BLK_STS_* return values directly to address Jens' comment. (1/5)
 - add 'unlikely' in blk_mq_try_issue_directly (1/5)
 - refactor the 2nd and 3rd patch based on the new 1st patch.
 - reserve the unused_cookie in 4th and 5th patch

V5:
 - rebase against Jens' for-4.21/block branch
 - adjust the order of patch04 and patch05
 - add patch06 to replace and kill the one line blk_mq_request_bypass_insert
 - comment changes

V4:
 - split the original patch 1 into two patches (now the 1st and 2nd patches)
 - rename the mq_decision to mq_issue_decision
 - comment changes

V3:
 - Correct the code for the case where bypass_insert is true and an io
   scheduler is attached. The request still needs to be issued in that
   case. (1/4)
 - Refactor the code to make it clearer. blk_mq_make_request is introduced
   to decide whether to insert, end or just return based on the return
   value of .queue_rq and bypass_insert (1/4)
 - Add the 2nd patch. It introduces a new decision result which indicates
   the request should be inserted with blk_mq_request_bypass_insert.
 - Modify the code to adapt to the new patch 1.

V2:
 - Add 1st and 2nd patch to refactor the code.


Jianchao Wang (4):
  blk-mq: insert to hctx dispatch list when bypass_insert is true
  blk-mq: refactor the code of issue request directly
  blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  blk-mq: replace and kill blk_mq_request_issue_directly

block/blk-core.c     |   4 +-
block/blk-mq-sched.c |   8 ++--
block/blk-mq.c       | 127 +++++++++++++++++++++++----------------------------
block/blk-mq.h       |   6 ++-
4 files changed, 68 insertions(+), 77 deletions(-)


Thanks
Jianchao

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH V10 1/4] blk-mq: insert to hctx dispatch list when bypass_insert is true
  2018-12-06  3:32 [PATCH V10 0/4] blk-mq: refactor code of issue directly Jianchao Wang
@ 2018-12-06  3:32 ` Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 2/4] blk-mq: refactor the code of issue request directly Jianchao Wang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Jianchao Wang @ 2018-12-06  3:32 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

We don't allow direct dispatch of anything but regular reads/writes,
and instead insert all non-read-write requests. However, this is not
correct in the 'bypass_insert == true' case, where the insert is
skipped and BLK_STS_RESOURCE is returned, so the caller fails forever.

Fix it by inserting the non-read-write request into the hctx dispatch
list when bypass_insert is true, which avoids involving merge and the
io scheduler.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9005505..01802bf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1822,6 +1822,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	bool force = false;
 
 	/*
 	 * RCU or SRCU read lock is needed before checking quiesced flag.
@@ -1836,9 +1837,18 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 		goto insert;
 	}
 
-	if (!blk_rq_can_direct_dispatch(rq) || (q->elevator && !bypass_insert))
+	if (q->elevator && !bypass_insert)
 		goto insert;
 
+	if (!blk_rq_can_direct_dispatch(rq)) {
+		/*
+		 * For 'bypass_insert == true' case, insert request into hctx
+		 * dispatch list.
+		 */
+		force = bypass_insert;
+		goto insert;
+	}
+
 	if (!blk_mq_get_dispatch_budget(hctx))
 		goto insert;
 
@@ -1849,8 +1859,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 	return __blk_mq_issue_directly(hctx, rq, cookie, last);
 insert:
-	if (bypass_insert)
+	if (force) {
+		blk_mq_request_bypass_insert(rq, run_queue);
+		return BLK_STS_OK;
+	} else if (bypass_insert) {
 		return BLK_STS_RESOURCE;
+	}
 
 	blk_mq_sched_insert_request(rq, false, run_queue, false);
 	return BLK_STS_OK;
-- 
2.7.4



* [PATCH V10 2/4] blk-mq: refactor the code of issue request directly
  2018-12-06  3:32 [PATCH V10 0/4] blk-mq: refactor code of issue directly Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 1/4] blk-mq: insert to hctx dispatch list when bypass_insert is true Jianchao Wang
@ 2018-12-06  3:32 ` Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 4/4] blk-mq: replace and kill blk_mq_request_issue_directly Jianchao Wang
  3 siblings, 0 replies; 8+ messages in thread
From: Jianchao Wang @ 2018-12-06  3:32 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface for issuing requests directly. The merged
interface takes full ownership of the request: it may insert it,
end it, or do nothing, based on the return value of .queue_rq and
the 'bypass' parameter. The caller then needs no further handling,
and the code can be cleaned up.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 112 ++++++++++++++++++++++++++-------------------------------
 1 file changed, 51 insertions(+), 61 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 01802bf..a1cccdd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1815,92 +1815,82 @@ static bool blk_rq_can_direct_dispatch(struct request *rq)
 	return req_op(rq) == REQ_OP_READ || req_op(rq) == REQ_OP_WRITE;
 }
 
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						struct request *rq,
 						blk_qc_t *cookie,
-						bool bypass_insert, bool last)
+						bool bypass, bool last)
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	blk_status_t ret = BLK_STS_RESOURCE;
+	int srcu_idx;
 	bool force = false;
 
+	if (!blk_rq_can_direct_dispatch(rq)) {
+		/*
+		 * Insert request to hctx dispatch list and return
+		 * BLK_STS_OK for 'bypass == true' case, otherwise,
+		 * the caller will fail forever.
+		 */
+		force = bypass;
+		goto out;
+	}
+
+	hctx_lock(hctx, &srcu_idx);
 	/*
-	 * RCU or SRCU read lock is needed before checking quiesced flag.
+	 * hctx_lock is needed before checking quiesced flag.
 	 *
-	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
-	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
-	 * and avoid driver to try to dispatch again.
+	 * When queue is stopped or quiesced, ignore 'bypass', insert
+	 * and return BLK_STS_OK to caller, and avoid driver to try to
+	 * dispatch again.
 	 */
-	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
+	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
 		run_queue = false;
-		bypass_insert = false;
-		goto insert;
+		bypass = false;
+		goto out_unlock;
 	}
 
-	if (q->elevator && !bypass_insert)
-		goto insert;
-
-	if (!blk_rq_can_direct_dispatch(rq)) {
-		/*
-		 * For 'bypass_insert == true' case, insert request into hctx
-		 * dispatch list.
-		 */
-		force = bypass_insert;
-		goto insert;
-	}
+	if (unlikely(q->elevator && !bypass))
+		goto out_unlock;
 
 	if (!blk_mq_get_dispatch_budget(hctx))
-		goto insert;
+		goto out_unlock;
 
 	if (!blk_mq_get_driver_tag(rq)) {
 		blk_mq_put_dispatch_budget(hctx);
-		goto insert;
+		goto out_unlock;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie, last);
-insert:
-	if (force) {
-		blk_mq_request_bypass_insert(rq, run_queue);
-		return BLK_STS_OK;
-	} else if (bypass_insert) {
-		return BLK_STS_RESOURCE;
+	ret = __blk_mq_issue_directly(hctx, rq, cookie, last);
+out_unlock:
+	hctx_unlock(hctx, srcu_idx);
+out:
+	switch (ret) {
+	case BLK_STS_OK:
+		break;
+	case BLK_STS_DEV_RESOURCE:
+	case BLK_STS_RESOURCE:
+		if (force) {
+			blk_mq_request_bypass_insert(rq, run_queue);
+			ret = BLK_STS_OK;
+		} else if (!bypass) {
+			blk_mq_sched_insert_request(rq, false, run_queue, false);
+		}
+		break;
+	default:
+		if (!bypass)
+			blk_mq_end_request(rq, ret);
+		break;
 	}
 
-	blk_mq_sched_insert_request(rq, false, run_queue, false);
-	return BLK_STS_OK;
-}
-
-static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
-		struct request *rq, blk_qc_t *cookie)
-{
-	blk_status_t ret;
-	int srcu_idx;
-
-	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
-
-	hctx_lock(hctx, &srcu_idx);
-
-	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
-	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
-		blk_mq_sched_insert_request(rq, false, true, false);
-	else if (ret != BLK_STS_OK)
-		blk_mq_end_request(rq, ret);
-
-	hctx_unlock(hctx, srcu_idx);
+	return ret;
 }
 
 blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 {
-	blk_status_t ret;
-	int srcu_idx;
-	blk_qc_t unused_cookie;
-	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+	blk_qc_t unused;
 
-	hctx_lock(hctx, &srcu_idx);
-	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
-	hctx_unlock(hctx, srcu_idx);
-
-	return ret;
+	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, last);
 }
 
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
@@ -2043,13 +2033,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		if (same_queue_rq) {
 			data.hctx = same_queue_rq->mq_hctx;
 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
-					&cookie);
+					&cookie, false, true);
 		}
 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
 			!data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true);
 	} else {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-- 
2.7.4



* [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  2018-12-06  3:32 [PATCH V10 0/4] blk-mq: refactor code of issue directly Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 1/4] blk-mq: insert to hctx dispatch list when bypass_insert is true Jianchao Wang
  2018-12-06  3:32 ` [PATCH V10 2/4] blk-mq: refactor the code of issue request directly Jianchao Wang
@ 2018-12-06  3:32 ` Jianchao Wang
  2018-12-06 15:19   ` Jens Axboe
  2018-12-06  3:32 ` [PATCH V10 4/4] blk-mq: replace and kill blk_mq_request_issue_directly Jianchao Wang
  3 siblings, 1 reply; 8+ messages in thread
From: Jianchao Wang @ 2018-12-06  3:32 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

It is not necessary to issue requests directly with bypass 'true'
in blk_mq_sched_insert_requests and then handle the non-issued
requests there. Just set bypass to 'false' and let
blk_mq_try_issue_directly handle them completely. Remove the
blk_rq_can_direct_dispatch check, as blk_mq_try_issue_directly
handles it well.

With respect to the commit_rqs hook, we only need to care about the
last request's result. If it was inserted, invoke commit_rqs.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq-sched.c |  8 +++-----
 block/blk-mq.c       | 26 +++++++++-----------------
 2 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index f096d898..5b4d52d 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -417,12 +417,10 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
 		 * busy in case of 'none' scheduler, and this way may save
 		 * us one extra enqueue & dequeue to sw queue.
 		 */
-		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+		if (!hctx->dispatch_busy && !e && !run_queue_async)
 			blk_mq_try_issue_list_directly(hctx, list);
-			if (list_empty(list))
-				return;
-		}
-		blk_mq_insert_requests(hctx, ctx, list);
+		else
+			blk_mq_insert_requests(hctx, ctx, list);
 	}
 
 	blk_mq_run_hw_queue(hctx, run_queue_async);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a1cccdd..dd07fe1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1896,32 +1896,24 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		struct list_head *list)
 {
+	blk_qc_t unused;
+	blk_status_t ret = BLK_STS_OK;
+
 	while (!list_empty(list)) {
-		blk_status_t ret;
 		struct request *rq = list_first_entry(list, struct request,
 				queuelist);
 
-		if (!blk_rq_can_direct_dispatch(rq))
-			break;
-
 		list_del_init(&rq->queuelist);
-		ret = blk_mq_request_issue_directly(rq, list_empty(list));
-		if (ret != BLK_STS_OK) {
-			if (ret == BLK_STS_RESOURCE ||
-					ret == BLK_STS_DEV_RESOURCE) {
-				list_add(&rq->queuelist, list);
-				break;
-			}
-			blk_mq_end_request(rq, ret);
-		}
+		ret = blk_mq_try_issue_directly(hctx, rq, &unused, false,
+						list_empty(list));
 	}
 
 	/*
-	 * If we didn't flush the entire list, we could have told
-	 * the driver there was more coming, but that turned out to
-	 * be a lie.
+	 * We only need to care about the last request's result,
+	 * if it is inserted, kick the hardware with commit_rqs hook.
 	 */
-	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+	if ((ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) &&
+	    hctx->queue->mq_ops->commit_rqs)
 		hctx->queue->mq_ops->commit_rqs(hctx);
 }
 
-- 
2.7.4



* [PATCH V10 4/4] blk-mq: replace and kill blk_mq_request_issue_directly
  2018-12-06  3:32 [PATCH V10 0/4] blk-mq: refactor code of issue directly Jianchao Wang
                   ` (2 preceding siblings ...)
  2018-12-06  3:32 ` [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
@ 2018-12-06  3:32 ` Jianchao Wang
  3 siblings, 0 replies; 8+ messages in thread
From: Jianchao Wang @ 2018-12-06  3:32 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Replace blk_mq_request_issue_directly with blk_mq_try_issue_directly
in blk_insert_cloned_request, and kill it since nobody uses it any more.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-core.c | 4 +++-
 block/blk-mq.c   | 9 +--------
 block/blk-mq.h   | 6 ++++--
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ad59102..c92d866 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1297,6 +1297,8 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
  */
 blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 {
+	blk_qc_t unused;
+
 	if (blk_cloned_rq_check_limits(q, rq))
 		return BLK_STS_IOERR;
 
@@ -1312,7 +1314,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 	 * bypass a potential scheduler on the bottom device for
 	 * insert.
 	 */
-	return blk_mq_request_issue_directly(rq, true);
+	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true);
 }
 EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index dd07fe1..3ae5095 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1815,7 +1815,7 @@ static bool blk_rq_can_direct_dispatch(struct request *rq)
 	return req_op(rq) == REQ_OP_READ || req_op(rq) == REQ_OP_WRITE;
 }
 
-static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						struct request *rq,
 						blk_qc_t *cookie,
 						bool bypass, bool last)
@@ -1886,13 +1886,6 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
-blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
-{
-	blk_qc_t unused;
-
-	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, last);
-}
-
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		struct list_head *list)
 {
diff --git a/block/blk-mq.h b/block/blk-mq.h
index a664ea4..b81e619 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -68,8 +68,10 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
 void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 				struct list_head *list);
 
-/* Used by blk_insert_cloned_request() to issue request directly */
-blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
+blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+						struct request *rq,
+						blk_qc_t *cookie,
+						bool bypass, bool last);
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 				    struct list_head *list);
 
-- 
2.7.4



* Re: [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  2018-12-06  3:32 ` [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
@ 2018-12-06 15:19   ` Jens Axboe
  2018-12-07  1:16     ` jianchao.wang
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2018-12-06 15:19 UTC (permalink / raw)
  To: Jianchao Wang; +Cc: ming.lei, linux-block, linux-kernel

On 12/5/18 8:32 PM, Jianchao Wang wrote:
> It is not necessary to issue requests directly with bypass 'true'
> in blk_mq_sched_insert_requests and then handle the non-issued
> requests there. Just set bypass to 'false' and let
> blk_mq_try_issue_directly handle them completely. Remove the
> blk_rq_can_direct_dispatch check, as blk_mq_try_issue_directly
> handles it well.
> 
> With respect to the commit_rqs hook, we only need to care about the
> last request's result. If it was inserted, invoke commit_rqs.

I don't think there's anything wrong, functionally, with this patch,
but I question the logic of continuing to attempt direct dispatch
if we fail one. If we get busy on one, for instance, we should just
insert that one to the dispatch list, and insert the rest of the list
normally.


-- 
Jens Axboe



* Re: [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  2018-12-06 15:19   ` Jens Axboe
@ 2018-12-07  1:16     ` jianchao.wang
  2018-12-07  1:18       ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: jianchao.wang @ 2018-12-07  1:16 UTC (permalink / raw)
  To: Jens Axboe; +Cc: ming.lei, linux-block, linux-kernel



On 12/6/18 11:19 PM, Jens Axboe wrote:
> On 12/5/18 8:32 PM, Jianchao Wang wrote:
>> It is not necessary to issue requests directly with bypass 'true'
>> in blk_mq_sched_insert_requests and then handle the non-issued
>> requests there. Just set bypass to 'false' and let
>> blk_mq_try_issue_directly handle them completely. Remove the
>> blk_rq_can_direct_dispatch check, as blk_mq_try_issue_directly
>> handles it well.
>>
>> With respect to the commit_rqs hook, we only need to care about the
>> last request's result. If it was inserted, invoke commit_rqs.
> 
> I don't think there's anything wrong, functionally, with this patch,
> but I question the logic of continuing to attempt direct dispatch
> if we fail one. If we get busy on one, for instance, we should just
> insert that one to the dispatch list, and insert the rest of the list
> normally.
> 
> 
It is OK with me to stop attempting direct dispatch and insert all of
the rest when we hit the non-ok case.

Thanks
Jianchao


* Re: [PATCH V10 3/4] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  2018-12-07  1:16     ` jianchao.wang
@ 2018-12-07  1:18       ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2018-12-07  1:18 UTC (permalink / raw)
  To: jianchao.wang; +Cc: ming.lei, linux-block, linux-kernel

On 12/6/18 6:16 PM, jianchao.wang wrote:
> 
> 
> On 12/6/18 11:19 PM, Jens Axboe wrote:
>> On 12/5/18 8:32 PM, Jianchao Wang wrote:
>>> It is not necessary to issue requests directly with bypass 'true'
>>> in blk_mq_sched_insert_requests and then handle the non-issued
>>> requests there. Just set bypass to 'false' and let
>>> blk_mq_try_issue_directly handle them completely. Remove the
>>> blk_rq_can_direct_dispatch check, as blk_mq_try_issue_directly
>>> handles it well.
>>>
>>> With respect to the commit_rqs hook, we only need to care about the
>>> last request's result. If it was inserted, invoke commit_rqs.
>>
>> I don't think there's anything wrong, functionally, with this patch,
>> but I question the logic of continuing to attempt direct dispatch
>> if we fail one. If we get busy on one, for instance, we should just
>> insert that one to the dispatch list, and insert the rest of the list
>> normally.
>>
>>
> It is OK for me to stop to attempt direct dispatch and insert all of the
> rest when meet the non-ok case.

Great, let's do that then, I think that makes more sense. The usual case
of not being able to dispatch is resource limited, and for that case
we'd just be wasting our time continuing to attempt direct dispatch.

-- 
Jens Axboe


