linux-kernel.vger.kernel.org archive mirror
* [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly
@ 2018-11-02  7:08 Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 1/5] blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer Jianchao Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Hi Jens

This patch set refactors the code for issuing requests directly and
fixes some defects.

The 1st patch makes __blk_mq_issue_directly able to accept a NULL
cookie pointer.

The 2nd patch refactors the code for issuing requests directly.
It merges blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface and makes it handle the return value of .queue_rq itself.

The 3rd patch lets requests be inserted into hctx->dispatch when
the queue is stopped or quiesced and bypass is true.

The 4th patch makes blk_mq_sched_insert_requests issue requests directly
with 'bypass' false, so it no longer needs to handle the non-issued
requests itself.

The 5th patch ensures the hctx is run on its mapped cpu in the
issue-directly path.

V4:
 - split the original patch 1 into two patches (the current 1st and 2nd patches)
 - rename mq_decision to mq_issue_decision
 - comment changes

V3:
 - Correct the code for the case where bypass_insert is true and an io
   scheduler is attached; the request still needs to be issued in that case. (1/4)
 - Refactor the code to make it clearer. blk_mq_make_request is introduced
   to decide whether to insert, end or just return based on the return value
   of .queue_rq and bypass_insert (1/4)
 - Add the 2nd patch. It introduces a new decision result which indicates the
   request should be inserted with blk_mq_request_bypass_insert.
 - Modify the code to adapt to the new patch 1.

V2:
 - Add 1st and 2nd patch

Jianchao Wang (5):
blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer
blk-mq: refactor the code of issue request directly
blk-mq: fix issue directly case when q is stopped or quiesced
blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
blk-mq: ensure hctx to be ran on mapped cpu when issue directly

 block/blk-mq-sched.c |   8 ++-
 block/blk-mq.c       | 149 ++++++++++++++++++++++++++++++---------------------
 2 files changed, 92 insertions(+), 65 deletions(-)

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH V4 1/5] blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
@ 2018-11-02  7:08 ` Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 2/5] blk-mq: refactor the code of issue request directly Jianchao Wang
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Make __blk_mq_issue_directly able to accept a NULL cookie pointer
and remove the dummy unused_cookie from blk_mq_request_issue_directly.
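
With this change the cookie is written back only when the caller
actually passed a pointer; reconstructed from the diff below, the tail
of __blk_mq_issue_directly ends up roughly as:

	switch (ret) {
	case BLK_STS_OK:
		blk_mq_update_dispatch_busy(hctx, false);
		new_cookie = request_to_qc_t(hctx, rq);
		break;
	case BLK_STS_RESOURCE:
	case BLK_STS_DEV_RESOURCE:
		blk_mq_update_dispatch_busy(hctx, true);
		__blk_mq_requeue_request(rq);
		new_cookie = BLK_QC_T_NONE;
		break;
	default:
		blk_mq_update_dispatch_busy(hctx, false);
		new_cookie = BLK_QC_T_NONE;
		break;
	}

	/* report the cookie only when the caller asked for one */
	if (cookie)
		*cookie = new_cookie;

	return ret;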

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index dcf10e3..af5b591 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1700,8 +1700,6 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	blk_qc_t new_cookie;
 	blk_status_t ret;
 
-	new_cookie = request_to_qc_t(hctx, rq);
-
 	/*
 	 * For OK queue, we are done. For error, caller may kill it.
 	 * Any other error (busy), just add it to our list as we
@@ -1711,19 +1709,23 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	switch (ret) {
 	case BLK_STS_OK:
 		blk_mq_update_dispatch_busy(hctx, false);
-		*cookie = new_cookie;
+		new_cookie = request_to_qc_t(hctx, rq);
 		break;
 	case BLK_STS_RESOURCE:
 	case BLK_STS_DEV_RESOURCE:
 		blk_mq_update_dispatch_busy(hctx, true);
 		__blk_mq_requeue_request(rq);
+		new_cookie = BLK_QC_T_NONE;
 		break;
 	default:
 		blk_mq_update_dispatch_busy(hctx, false);
-		*cookie = BLK_QC_T_NONE;
+		new_cookie = BLK_QC_T_NONE;
 		break;
 	}
 
+	if (cookie)
+		*cookie = new_cookie;
+
 	return ret;
 }
 
@@ -1791,12 +1793,11 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
 {
 	blk_status_t ret;
 	int srcu_idx;
-	blk_qc_t unused_cookie;
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
 
 	hctx_lock(hctx, &srcu_idx);
-	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
+	ret = __blk_mq_try_issue_directly(hctx, rq, NULL, true);
 	hctx_unlock(hctx, srcu_idx);
 
 	return ret;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH V4 2/5] blk-mq: refactor the code of issue request directly
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 1/5] blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer Jianchao Wang
@ 2018-11-02  7:08 ` Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 3/5] blk-mq: fix issue directly case when q is stopped or quiesced Jianchao Wang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface that is able to handle the return value from the
.queue_rq callback. To make the code clearer, introduce a new enum
mq_issue_decision and a helper blk_mq_make_decision to decide how to
handle the non-issued requests.
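
For reference, the new helper maps the .queue_rq status and the
'bypass' flag to one of three actions (names as introduced in the diff
below):

	enum mq_issue_decision {
		MQ_ISSUE_INSERT_QUEUE,	/* insert via blk_mq_sched_insert_request() */
		MQ_ISSUE_END_REQUEST,	/* complete the request with the error status */
		MQ_ISSUE_DO_NOTHING,	/* hand the status straight back to the caller */
	};

BLK_STS_OK maps to MQ_ISSUE_DO_NOTHING; BLK_STS_RESOURCE and
BLK_STS_DEV_RESOURCE map to MQ_ISSUE_INSERT_QUEUE unless 'bypass' is
set; any other status maps to MQ_ISSUE_END_REQUEST unless 'bypass' is
set. With 'bypass' true the status is always handed back to the caller.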

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 104 +++++++++++++++++++++++++++++++++------------------------
 1 file changed, 61 insertions(+), 43 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index af5b591..962fdfc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1729,78 +1729,96 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+enum mq_issue_decision {
+	MQ_ISSUE_INSERT_QUEUE,
+	MQ_ISSUE_END_REQUEST,
+	MQ_ISSUE_DO_NOTHING,
+};
+
+static inline enum mq_issue_decision
+	blk_mq_make_decision(blk_status_t ret, bool bypass)
+{
+	enum mq_issue_decision dec;
+
+	switch (ret) {
+	case BLK_STS_OK:
+		dec = MQ_ISSUE_DO_NOTHING;
+		break;
+	case BLK_STS_DEV_RESOURCE:
+	case BLK_STS_RESOURCE:
+		dec = bypass ? MQ_ISSUE_DO_NOTHING : MQ_ISSUE_INSERT_QUEUE;
+		break;
+	default:
+		dec = bypass ? MQ_ISSUE_DO_NOTHING : MQ_ISSUE_END_REQUEST;
+		break;
+	}
+
+	return dec;
+}
+
+static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						struct request *rq,
 						blk_qc_t *cookie,
-						bool bypass_insert)
+						bool bypass)
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	blk_status_t ret = BLK_STS_RESOURCE;
+	enum mq_issue_decision dec;
+	int srcu_idx;
+
+	hctx_lock(hctx, &srcu_idx);
 
 	/*
-	 * RCU or SRCU read lock is needed before checking quiesced flag.
+	 * hctx_lock is needed before checking the quiesced flag.
 	 *
-	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
-	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
-	 * and avoid driver to try to dispatch again.
+	 * When the queue is stopped or quiesced, ignore 'bypass', insert
+	 * the request and return BLK_STS_OK to the caller to keep the
+	 * driver from trying to dispatch it again.
 	 */
 	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
 		run_queue = false;
-		bypass_insert = false;
-		goto insert;
+		bypass = false;
+		goto out_unlock;
 	}
 
-	if (q->elevator && !bypass_insert)
-		goto insert;
+	if (q->elevator && !bypass)
+		goto out_unlock;
 
 	if (!blk_mq_get_dispatch_budget(hctx))
-		goto insert;
+		goto out_unlock;
 
 	if (!blk_mq_get_driver_tag(rq)) {
 		blk_mq_put_dispatch_budget(hctx);
-		goto insert;
+		goto out_unlock;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie);
-insert:
-	if (bypass_insert)
-		return BLK_STS_RESOURCE;
+	ret = __blk_mq_issue_directly(hctx, rq, cookie);
 
-	blk_mq_sched_insert_request(rq, false, run_queue, false);
-	return BLK_STS_OK;
-}
-
-static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
-		struct request *rq, blk_qc_t *cookie)
-{
-	blk_status_t ret;
-	int srcu_idx;
-
-	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
-
-	hctx_lock(hctx, &srcu_idx);
+out_unlock:
+	hctx_unlock(hctx, srcu_idx);
 
-	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
-	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
-		blk_mq_sched_insert_request(rq, false, true, false);
-	else if (ret != BLK_STS_OK)
+	dec = blk_mq_make_decision(ret, bypass);
+	switch (dec) {
+	case MQ_ISSUE_INSERT_QUEUE:
+		blk_mq_sched_insert_request(rq, false, run_queue, false);
+		break;
+	case MQ_ISSUE_END_REQUEST:
 		blk_mq_end_request(rq, ret);
+		break;
+	default:
+		return ret;
+	}
 
-	hctx_unlock(hctx, srcu_idx);
+	return BLK_STS_OK;
 }
 
 blk_status_t blk_mq_request_issue_directly(struct request *rq)
 {
-	blk_status_t ret;
-	int srcu_idx;
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
 
-	hctx_lock(hctx, &srcu_idx);
-	ret = __blk_mq_try_issue_directly(hctx, rq, NULL, true);
-	hctx_unlock(hctx, srcu_idx);
-
-	return ret;
+	return blk_mq_try_issue_directly(hctx, rq, NULL, true);
 }
 
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
@@ -1922,13 +1940,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 			data.hctx = blk_mq_map_queue(q,
 					same_queue_rq->mq_ctx->cpu);
 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
-					&cookie);
+					&cookie, false);
 		}
 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
 			!data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false);
 	} else {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH V4 3/5] blk-mq: fix issue directly case when q is stopped or quiesced
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 1/5] blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 2/5] blk-mq: refactor the code of issue request directly Jianchao Wang
@ 2018-11-02  7:08 ` Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 4/5] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

When we try to issue a request directly and the queue is stopped or
quiesced, 'bypass' is ignored and BLK_STS_OK is returned to the caller
so that it does not dispatch the request again. The request is then
inserted with blk_mq_sched_insert_request. This is not correct for the
dm-rq case, where we should avoid passing requests through the
underlying path's io scheduler.

To fix it, add a new mq_issue_decision entry MQ_ISSUE_INSERT_DISPATCH
for the case above, where the request needs to be inserted forcibly,
and use blk_mq_request_bypass_insert to insert the request into
hctx->dispatch directly.
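
Reconstructed from the diff below, the stopped/quiesced branch now only
marks the insert as forced:

	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
		run_queue = false;
		force = true;
		goto out_unlock;
	}

and, after unlocking, the forced-and-bypassed case goes straight to
hctx->dispatch:

	case MQ_ISSUE_INSERT_DISPATCH:
		blk_mq_request_bypass_insert(rq, run_queue);
		break;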

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 962fdfc..a0b9b6c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1731,12 +1731,13 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 enum mq_issue_decision {
 	MQ_ISSUE_INSERT_QUEUE,
+	MQ_ISSUE_INSERT_DISPATCH,
 	MQ_ISSUE_END_REQUEST,
 	MQ_ISSUE_DO_NOTHING,
 };
 
 static inline enum mq_issue_decision
-	blk_mq_make_decision(blk_status_t ret, bool bypass)
+	blk_mq_make_decision(blk_status_t ret, bool bypass, bool force)
 {
 	enum mq_issue_decision dec;
 
@@ -1746,7 +1747,10 @@ static inline enum mq_issue_decision
 		break;
 	case BLK_STS_DEV_RESOURCE:
 	case BLK_STS_RESOURCE:
-		dec = bypass ? MQ_ISSUE_DO_NOTHING : MQ_ISSUE_INSERT_QUEUE;
+		if (force)
+			dec = bypass ? MQ_ISSUE_INSERT_DISPATCH : MQ_ISSUE_INSERT_QUEUE;
+		else
+			dec = bypass ? MQ_ISSUE_DO_NOTHING : MQ_ISSUE_INSERT_QUEUE;
 		break;
 	default:
 		dec = bypass ? MQ_ISSUE_DO_NOTHING : MQ_ISSUE_END_REQUEST;
@@ -1762,7 +1766,7 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						bool bypass)
 {
 	struct request_queue *q = rq->q;
-	bool run_queue = true;
+	bool run_queue = true, force = false;
 	blk_status_t ret = BLK_STS_RESOURCE;
 	enum mq_issue_decision dec;
 	int srcu_idx;
@@ -1778,7 +1782,7 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	 */
 	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
 		run_queue = false;
-		bypass = false;
+		force = true;
 		goto out_unlock;
 	}
 
@@ -1798,11 +1802,14 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 out_unlock:
 	hctx_unlock(hctx, srcu_idx);
 
-	dec = blk_mq_make_decision(ret, bypass);
+	dec = blk_mq_make_decision(ret, bypass, force);
 	switch (dec) {
 	case MQ_ISSUE_INSERT_QUEUE:
 		blk_mq_sched_insert_request(rq, false, run_queue, false);
 		break;
+	case MQ_ISSUE_INSERT_DISPATCH:
+		blk_mq_request_bypass_insert(rq, run_queue);
+		break;
 	case MQ_ISSUE_END_REQUEST:
 		blk_mq_end_request(rq, ret);
 		break;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH V4 4/5] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
                   ` (2 preceding siblings ...)
  2018-11-02  7:08 ` [PATCH V4 3/5] blk-mq: fix issue directly case when q is stopped or quiesced Jianchao Wang
@ 2018-11-02  7:08 ` Jianchao Wang
  2018-11-02  7:08 ` [PATCH V4 5/5] blk-mq: ensure hctx to be ran on mapped cpu when issue directly Jianchao Wang
  2018-11-06  1:37 ` [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly jianchao.wang
  5 siblings, 0 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

It is not necessary for blk_mq_sched_insert_requests to issue requests
directly with bypass 'true' and then insert the non-issued requests
itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
handle them entirely.
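
Reconstructed from the diff below, the issue loop in
blk_mq_try_issue_list_directly then shrinks to roughly:

	while (!list_empty(list)) {
		struct request *rq = list_first_entry(list, struct request,
				queuelist);

		list_del_init(&rq->queuelist);
		/* non-issued requests are inserted by the callee itself */
		blk_mq_try_issue_directly(hctx, rq, NULL, false);
	}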

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq-sched.c |  8 +++-----
 block/blk-mq.c       | 11 +----------
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 29bfe80..23cd97e 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -411,12 +411,10 @@ void blk_mq_sched_insert_requests(struct request_queue *q,
 		 * busy in case of 'none' scheduler, and this way may save
 		 * us one extra enqueue & dequeue to sw queue.
 		 */
-		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+		if (!hctx->dispatch_busy && !e && !run_queue_async)
 			blk_mq_try_issue_list_directly(hctx, list);
-			if (list_empty(list))
-				return;
-		}
-		blk_mq_insert_requests(hctx, ctx, list);
+		else
+			blk_mq_insert_requests(hctx, ctx, list);
 	}
 
 	blk_mq_run_hw_queue(hctx, run_queue_async);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a0b9b6c..bf8b144 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1832,20 +1832,11 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		struct list_head *list)
 {
 	while (!list_empty(list)) {
-		blk_status_t ret;
 		struct request *rq = list_first_entry(list, struct request,
 				queuelist);
 
 		list_del_init(&rq->queuelist);
-		ret = blk_mq_request_issue_directly(rq);
-		if (ret != BLK_STS_OK) {
-			if (ret == BLK_STS_RESOURCE ||
-					ret == BLK_STS_DEV_RESOURCE) {
-				list_add(&rq->queuelist, list);
-				break;
-			}
-			blk_mq_end_request(rq, ret);
-		}
+		blk_mq_try_issue_directly(hctx, rq, NULL, false);
 	}
 }
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH V4 5/5] blk-mq: ensure hctx to be ran on mapped cpu when issue directly
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
                   ` (3 preceding siblings ...)
  2018-11-02  7:08 ` [PATCH V4 4/5] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
@ 2018-11-02  7:08 ` Jianchao Wang
  2018-11-06  1:37 ` [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly jianchao.wang
  5 siblings, 0 replies; 7+ messages in thread
From: Jianchao Wang @ 2018-11-02  7:08 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

When a request is issued directly and the task has been migrated off
the cpu on which it allocated the request, the hctx could be run on a
cpu it is not mapped to. To fix this, insert the request if
BLK_MQ_F_BLOCKING is set; otherwise check whether the current cpu is
mapped to the hctx and invoke __blk_mq_issue_directly with preemption
disabled.
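
Reconstructed from the diff below, the checks at the top of
blk_mq_try_issue_directly read roughly as:

	if (hctx->flags & BLK_MQ_F_BLOCKING) {
		/* .queue_rq may sleep, cannot run with preemption disabled */
		force = true;
		goto out;
	}

	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
		/* current cpu is not mapped to this hctx, insert instead */
		put_cpu();
		force = true;
		goto out;
	}

	hctx_lock(hctx, &srcu_idx);

with the matching put_cpu() done right after hctx_unlock() on the
issue path.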

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index bf8b144..4450eb6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1771,6 +1771,17 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	enum mq_issue_decision dec;
 	int srcu_idx;
 
+	if (hctx->flags & BLK_MQ_F_BLOCKING) {
+		force = true;
+		goto out;
+	}
+
+	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
+		put_cpu();
+		force = true;
+		goto out;
+	}
+
 	hctx_lock(hctx, &srcu_idx);
 
 	/*
@@ -1801,7 +1812,8 @@ static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 out_unlock:
 	hctx_unlock(hctx, srcu_idx);
-
+	put_cpu();
+out:
 	dec = blk_mq_make_decision(ret, bypass, force);
 	switch (dec) {
 	case MQ_ISSUE_INSERT_QUEUE:
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly
  2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
                   ` (4 preceding siblings ...)
  2018-11-02  7:08 ` [PATCH V4 5/5] blk-mq: ensure hctx to be ran on mapped cpu when issue directly Jianchao Wang
@ 2018-11-06  1:37 ` jianchao.wang
  5 siblings, 0 replies; 7+ messages in thread
From: jianchao.wang @ 2018-11-06  1:37 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

Would anyone please take a look at this?

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2018-11-06  1:37 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-02  7:08 [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly Jianchao Wang
2018-11-02  7:08 ` [PATCH V4 1/5] blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer Jianchao Wang
2018-11-02  7:08 ` [PATCH V4 2/5] blk-mq: refactor the code of issue request directly Jianchao Wang
2018-11-02  7:08 ` [PATCH V4 3/5] blk-mq: fix issue directly case when q is stopped or quiesced Jianchao Wang
2018-11-02  7:08 ` [PATCH V4 4/5] blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests Jianchao Wang
2018-11-02  7:08 ` [PATCH V4 5/5] blk-mq: ensure hctx to be ran on mapped cpu when issue directly Jianchao Wang
2018-11-06  1:37 ` [PATCH V4 0/5] blk-mq: refactor and fix on issue request directly jianchao.wang
