All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/4] block: some misc changes
@ 2015-10-14  3:30 Ming Lei
  2015-10-14  3:30 ` [PATCH 1/4] block: setup bi_phys_segments after splitting Ming Lei
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Ming Lei @ 2015-10-14  3:30 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lin, Kent Overstreet, Christoph Hellwig

Hi,

The 1st three patches are optimizations related with bio splitting.

The 4th patch is to mark ctx as pending at batch in flush plug path.

Thanks,


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/4] block: setup bi_phys_segments after splitting
  2015-10-14  3:30 [PATCH 0/4] block: some misc changes Ming Lei
@ 2015-10-14  3:30 ` Ming Lei
  2015-10-15 15:14   ` Jeff Moyer
  2015-10-14  3:30 ` [PATCH 2/4] block: avoid to merge splitted bio Ming Lei
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2015-10-14  3:30 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel
  Cc: Ming Lin, Kent Overstreet, Christoph Hellwig, Ming Lei

The number of bio->bi_phys_segments is always obtained
during bio splitting, so it is natural to setup it
just after bio splitting, then we can avoid to compute
nr_segment again during merge.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/blk-merge.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index c4e9c37..22293fd 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -11,13 +11,16 @@
 
 static struct bio *blk_bio_discard_split(struct request_queue *q,
 					 struct bio *bio,
-					 struct bio_set *bs)
+					 struct bio_set *bs,
+					 unsigned *nsegs)
 {
 	unsigned int max_discard_sectors, granularity;
 	int alignment;
 	sector_t tmp;
 	unsigned split_sectors;
 
+	*nsegs = 1;
+
 	/* Zero-sector (unknown) and one-sector granularities are the same.  */
 	granularity = max(q->limits.discard_granularity >> 9, 1U);
 
@@ -51,8 +54,11 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
 
 static struct bio *blk_bio_write_same_split(struct request_queue *q,
 					    struct bio *bio,
-					    struct bio_set *bs)
+					    struct bio_set *bs,
+					    unsigned *nsegs)
 {
+	*nsegs = 1;
+
 	if (!q->limits.max_write_same_sectors)
 		return NULL;
 
@@ -64,7 +70,8 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
 
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
-					 struct bio_set *bs)
+					 struct bio_set *bs,
+					 unsigned *segs)
 {
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
@@ -106,22 +113,30 @@ new_segment:
 		sectors += bv.bv_len >> 9;
 	}
 
+	*segs = nsegs;
 	return NULL;
 split:
+	*segs = nsegs;
 	return bio_split(bio, sectors, GFP_NOIO, bs);
 }
 
 void blk_queue_split(struct request_queue *q, struct bio **bio,
 		     struct bio_set *bs)
 {
-	struct bio *split;
+	struct bio *split, *res;
+	unsigned nsegs;
 
 	if ((*bio)->bi_rw & REQ_DISCARD)
-		split = blk_bio_discard_split(q, *bio, bs);
+		split = blk_bio_discard_split(q, *bio, bs, &nsegs);
 	else if ((*bio)->bi_rw & REQ_WRITE_SAME)
-		split = blk_bio_write_same_split(q, *bio, bs);
+		split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
 	else
-		split = blk_bio_segment_split(q, *bio, q->bio_split);
+		split = blk_bio_segment_split(q, *bio, q->bio_split, &nsegs);
+
+	/* physical segments can be figured out during splitting */
+	res = split ? split : *bio;
+	res->bi_phys_segments = nsegs;
+	bio_set_flag(res, BIO_SEG_VALID);
 
 	if (split) {
 		bio_chain(split, *bio);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/4] block: avoid to merge splitted bio
  2015-10-14  3:30 [PATCH 0/4] block: some misc changes Ming Lei
  2015-10-14  3:30 ` [PATCH 1/4] block: setup bi_phys_segments after splitting Ming Lei
@ 2015-10-14  3:30 ` Ming Lei
  2015-10-15 15:15   ` Jeff Moyer
  2015-10-14  3:30 ` [PATCH 3/4] blk-mq: check bio_mergeable() early before merging Ming Lei
  2015-10-14  3:30 ` [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path Ming Lei
  3 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2015-10-14  3:30 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel
  Cc: Ming Lin, Kent Overstreet, Christoph Hellwig, Ming Lei

The splitted bio has been already too fat to merge, so mark it
as NOMERGE.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/blk-merge.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 22293fd..de5716d8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -139,6 +139,9 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
 	bio_set_flag(res, BIO_SEG_VALID);
 
 	if (split) {
+		/* there isn't chance to merge the splitted bio */
+		split->bi_rw |= REQ_NOMERGE;
+
 		bio_chain(split, *bio);
 		generic_make_request(*bio);
 		*bio = split;
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/4] blk-mq: check bio_mergeable() early before merging
  2015-10-14  3:30 [PATCH 0/4] block: some misc changes Ming Lei
  2015-10-14  3:30 ` [PATCH 1/4] block: setup bi_phys_segments after splitting Ming Lei
  2015-10-14  3:30 ` [PATCH 2/4] block: avoid to merge splitted bio Ming Lei
@ 2015-10-14  3:30 ` Ming Lei
  2015-10-15 15:21   ` Jeff Moyer
  2015-10-14  3:30 ` [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path Ming Lei
  3 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2015-10-14  3:30 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel
  Cc: Ming Lin, Kent Overstreet, Christoph Hellwig, Ming Lei

It isn't necessary to try to merge the bio which is marked
as NOMERGE.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/blk-mq.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 546b3b8..deb5f4c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -671,6 +671,9 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
 	struct request *rq;
 	int checked = 8;
 
+	if (!bio_mergeable(bio))
+		return false;
+
 	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
 		int el_ret;
 
@@ -1140,7 +1143,7 @@ static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
 					 struct blk_mq_ctx *ctx,
 					 struct request *rq, struct bio *bio)
 {
-	if (!hctx_allow_merges(hctx)) {
+	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio)) {
 		blk_mq_bio_to_request(rq, bio);
 		spin_lock(&ctx->lock);
 insert_rq:
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path
  2015-10-14  3:30 [PATCH 0/4] block: some misc changes Ming Lei
                   ` (2 preceding siblings ...)
  2015-10-14  3:30 ` [PATCH 3/4] blk-mq: check bio_mergeable() early before merging Ming Lei
@ 2015-10-14  3:30 ` Ming Lei
  2015-10-15 15:26   ` Jeff Moyer
  3 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2015-10-14  3:30 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel
  Cc: Ming Lin, Kent Overstreet, Christoph Hellwig, Ming Lei

Most of times, flush plug should be the hottest I/O path,
so mark ctx as pending after all requests in the list are
inserted.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/blk-mq.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index deb5f4c..1c943b9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -993,18 +993,25 @@ void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
 }
 EXPORT_SYMBOL(blk_mq_delay_queue);
 
-static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
-				    struct request *rq, bool at_head)
+static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
+					    struct blk_mq_ctx *ctx,
+					    struct request *rq,
+					    bool at_head)
 {
-	struct blk_mq_ctx *ctx = rq->mq_ctx;
-
 	trace_block_rq_insert(hctx->queue, rq);
 
 	if (at_head)
 		list_add(&rq->queuelist, &ctx->rq_list);
 	else
 		list_add_tail(&rq->queuelist, &ctx->rq_list);
+}
+
+static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
+				    struct request *rq, bool at_head)
+{
+	struct blk_mq_ctx *ctx = rq->mq_ctx;
 
+	__blk_mq_insert_req_list(hctx, ctx, rq, at_head);
 	blk_mq_hctx_mark_pending(hctx, ctx);
 }
 
@@ -1060,8 +1067,9 @@ static void blk_mq_insert_requests(struct request_queue *q,
 		rq = list_first_entry(list, struct request, queuelist);
 		list_del_init(&rq->queuelist);
 		rq->mq_ctx = ctx;
-		__blk_mq_insert_request(hctx, rq, false);
+		__blk_mq_insert_req_list(hctx, ctx, rq, false);
 	}
+	blk_mq_hctx_mark_pending(hctx, ctx);
 	spin_unlock(&ctx->lock);
 
 	blk_mq_run_hw_queue(hctx, from_schedule);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/4] block: setup bi_phys_segments after splitting
  2015-10-14  3:30 ` [PATCH 1/4] block: setup bi_phys_segments after splitting Ming Lei
@ 2015-10-15 15:14   ` Jeff Moyer
  0 siblings, 0 replies; 11+ messages in thread
From: Jeff Moyer @ 2015-10-15 15:14 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-kernel, Ming Lin, Kent Overstreet, Christoph Hellwig

Ming Lei <ming.lei@canonical.com> writes:

> The number of bio->bi_phys_segments is always obtained
> during bio splitting, so it is natural to setup it
> just after bio splitting, then we can avoid to compute
> nr_segment again during merge.
>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>

> ---
>  block/blk-merge.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index c4e9c37..22293fd 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -11,13 +11,16 @@
>  
>  static struct bio *blk_bio_discard_split(struct request_queue *q,
>  					 struct bio *bio,
> -					 struct bio_set *bs)
> +					 struct bio_set *bs,
> +					 unsigned *nsegs)
>  {
>  	unsigned int max_discard_sectors, granularity;
>  	int alignment;
>  	sector_t tmp;
>  	unsigned split_sectors;
>  
> +	*nsegs = 1;
> +
>  	/* Zero-sector (unknown) and one-sector granularities are the same.  */
>  	granularity = max(q->limits.discard_granularity >> 9, 1U);
>  
> @@ -51,8 +54,11 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
>  
>  static struct bio *blk_bio_write_same_split(struct request_queue *q,
>  					    struct bio *bio,
> -					    struct bio_set *bs)
> +					    struct bio_set *bs,
> +					    unsigned *nsegs)
>  {
> +	*nsegs = 1;
> +
>  	if (!q->limits.max_write_same_sectors)
>  		return NULL;
>  
> @@ -64,7 +70,8 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
>  
>  static struct bio *blk_bio_segment_split(struct request_queue *q,
>  					 struct bio *bio,
> -					 struct bio_set *bs)
> +					 struct bio_set *bs,
> +					 unsigned *segs)
>  {
>  	struct bio_vec bv, bvprv, *bvprvp = NULL;
>  	struct bvec_iter iter;
> @@ -106,22 +113,30 @@ new_segment:
>  		sectors += bv.bv_len >> 9;
>  	}
>  
> +	*segs = nsegs;
>  	return NULL;
>  split:
> +	*segs = nsegs;
>  	return bio_split(bio, sectors, GFP_NOIO, bs);
>  }
>  
>  void blk_queue_split(struct request_queue *q, struct bio **bio,
>  		     struct bio_set *bs)
>  {
> -	struct bio *split;
> +	struct bio *split, *res;
> +	unsigned nsegs;
>  
>  	if ((*bio)->bi_rw & REQ_DISCARD)
> -		split = blk_bio_discard_split(q, *bio, bs);
> +		split = blk_bio_discard_split(q, *bio, bs, &nsegs);
>  	else if ((*bio)->bi_rw & REQ_WRITE_SAME)
> -		split = blk_bio_write_same_split(q, *bio, bs);
> +		split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
>  	else
> -		split = blk_bio_segment_split(q, *bio, q->bio_split);
> +		split = blk_bio_segment_split(q, *bio, q->bio_split, &nsegs);
> +
> +	/* physical segments can be figured out during splitting */
> +	res = split ? split : *bio;
> +	res->bi_phys_segments = nsegs;
> +	bio_set_flag(res, BIO_SEG_VALID);
>  
>  	if (split) {
>  		bio_chain(split, *bio);

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/4] block: avoid to merge splitted bio
  2015-10-14  3:30 ` [PATCH 2/4] block: avoid to merge splitted bio Ming Lei
@ 2015-10-15 15:15   ` Jeff Moyer
  0 siblings, 0 replies; 11+ messages in thread
From: Jeff Moyer @ 2015-10-15 15:15 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-kernel, Ming Lin, Kent Overstreet, Christoph Hellwig

Ming Lei <ming.lei@canonical.com> writes:

> The splitted bio has been already too fat to merge, so mark it
> as NOMERGE.
>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>

> ---
>  block/blk-merge.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 22293fd..de5716d8 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -139,6 +139,9 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
>  	bio_set_flag(res, BIO_SEG_VALID);
>  
>  	if (split) {
> +		/* there isn't chance to merge the splitted bio */
> +		split->bi_rw |= REQ_NOMERGE;
> +
>  		bio_chain(split, *bio);
>  		generic_make_request(*bio);
>  		*bio = split;

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/4] blk-mq: check bio_mergeable() early before merging
  2015-10-14  3:30 ` [PATCH 3/4] blk-mq: check bio_mergeable() early before merging Ming Lei
@ 2015-10-15 15:21   ` Jeff Moyer
  2015-10-16  0:26     ` Ming Lei
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Moyer @ 2015-10-15 15:21 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-kernel, Ming Lin, Kent Overstreet, Christoph Hellwig

Ming Lei <ming.lei@canonical.com> writes:

> It isn't necessary to try to merge the bio which is marked
> as NOMERGE.
>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  block/blk-mq.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 546b3b8..deb5f4c 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -671,6 +671,9 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
>  	struct request *rq;
>  	int checked = 8;
>  
> +	if (!bio_mergeable(bio))
> +		return false;
> +
>  	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
>  		int el_ret;
>  
> @@ -1140,7 +1143,7 @@ static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
>  					 struct blk_mq_ctx *ctx,
>  					 struct request *rq, struct bio *bio)
>  {
> -	if (!hctx_allow_merges(hctx)) {
> +	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio)) {
>  		blk_mq_bio_to_request(rq, bio);
>  		spin_lock(&ctx->lock);
>  insert_rq:

blk_mq_attempt_merge is only called from blk_mq_merge_queue_io.  So, by
adding the conditional in blk_mq_merge_queue_io, you don't need any
change in blk_mq_attempt_merge.

Also, why haven't you updated the non-multiqueue code paths similarly?

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path
  2015-10-14  3:30 ` [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path Ming Lei
@ 2015-10-15 15:26   ` Jeff Moyer
  2015-10-16  0:24     ` Ming Lei
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Moyer @ 2015-10-15 15:26 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-kernel, Ming Lin, Kent Overstreet, Christoph Hellwig

Ming Lei <ming.lei@canonical.com> writes:

> Most of times, flush plug should be the hottest I/O path,
> so mark ctx as pending after all requests in the list are
> inserted.

Hi, Ming,

Did you see some performance gain from this?

-Jeff

>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  block/blk-mq.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index deb5f4c..1c943b9 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -993,18 +993,25 @@ void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
>  }
>  EXPORT_SYMBOL(blk_mq_delay_queue);
>  
> -static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
> -				    struct request *rq, bool at_head)
> +static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
> +					    struct blk_mq_ctx *ctx,
> +					    struct request *rq,
> +					    bool at_head)
>  {
> -	struct blk_mq_ctx *ctx = rq->mq_ctx;
> -
>  	trace_block_rq_insert(hctx->queue, rq);
>  
>  	if (at_head)
>  		list_add(&rq->queuelist, &ctx->rq_list);
>  	else
>  		list_add_tail(&rq->queuelist, &ctx->rq_list);
> +}
> +
> +static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
> +				    struct request *rq, bool at_head)
> +{
> +	struct blk_mq_ctx *ctx = rq->mq_ctx;
>  
> +	__blk_mq_insert_req_list(hctx, ctx, rq, at_head);
>  	blk_mq_hctx_mark_pending(hctx, ctx);
>  }
>  
> @@ -1060,8 +1067,9 @@ static void blk_mq_insert_requests(struct request_queue *q,
>  		rq = list_first_entry(list, struct request, queuelist);
>  		list_del_init(&rq->queuelist);
>  		rq->mq_ctx = ctx;
> -		__blk_mq_insert_request(hctx, rq, false);
> +		__blk_mq_insert_req_list(hctx, ctx, rq, false);
>  	}
> +	blk_mq_hctx_mark_pending(hctx, ctx);
>  	spin_unlock(&ctx->lock);
>  
>  	blk_mq_run_hw_queue(hctx, from_schedule);

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path
  2015-10-15 15:26   ` Jeff Moyer
@ 2015-10-16  0:24     ` Ming Lei
  0 siblings, 0 replies; 11+ messages in thread
From: Ming Lei @ 2015-10-16  0:24 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Jens Axboe, Linux Kernel Mailing List, Ming Lin, Kent Overstreet,
	Christoph Hellwig

Hi Jeff,

On Thu, Oct 15, 2015 at 11:26 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Ming Lei <ming.lei@canonical.com> writes:
>
>> Most of times, flush plug should be the hottest I/O path,
>> so mark ctx as pending after all requests in the list are
>> inserted.
>
> Hi, Ming,
>
> Did you see some performance gain from this?

Not done the test yet, but I think it is the correct thing to do because
the flag can be cleared quite frequently from other CPUs, and the numberof
marking pending can be decreased to ~1/16 of previous one.

>
> -Jeff
>
>>
>> Signed-off-by: Ming Lei <ming.lei@canonical.com>
>> ---
>>  block/blk-mq.c | 18 +++++++++++++-----
>>  1 file changed, 13 insertions(+), 5 deletions(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index deb5f4c..1c943b9 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -993,18 +993,25 @@ void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
>>  }
>>  EXPORT_SYMBOL(blk_mq_delay_queue);
>>
>> -static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
>> -                                 struct request *rq, bool at_head)
>> +static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
>> +                                         struct blk_mq_ctx *ctx,
>> +                                         struct request *rq,
>> +                                         bool at_head)
>>  {
>> -     struct blk_mq_ctx *ctx = rq->mq_ctx;
>> -
>>       trace_block_rq_insert(hctx->queue, rq);
>>
>>       if (at_head)
>>               list_add(&rq->queuelist, &ctx->rq_list);
>>       else
>>               list_add_tail(&rq->queuelist, &ctx->rq_list);
>> +}
>> +
>> +static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
>> +                                 struct request *rq, bool at_head)
>> +{
>> +     struct blk_mq_ctx *ctx = rq->mq_ctx;
>>
>> +     __blk_mq_insert_req_list(hctx, ctx, rq, at_head);
>>       blk_mq_hctx_mark_pending(hctx, ctx);
>>  }
>>
>> @@ -1060,8 +1067,9 @@ static void blk_mq_insert_requests(struct request_queue *q,
>>               rq = list_first_entry(list, struct request, queuelist);
>>               list_del_init(&rq->queuelist);
>>               rq->mq_ctx = ctx;
>> -             __blk_mq_insert_request(hctx, rq, false);
>> +             __blk_mq_insert_req_list(hctx, ctx, rq, false);
>>       }
>> +     blk_mq_hctx_mark_pending(hctx, ctx);
>>       spin_unlock(&ctx->lock);
>>
>>       blk_mq_run_hw_queue(hctx, from_schedule);
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/4] blk-mq: check bio_mergeable() early before merging
  2015-10-15 15:21   ` Jeff Moyer
@ 2015-10-16  0:26     ` Ming Lei
  0 siblings, 0 replies; 11+ messages in thread
From: Ming Lei @ 2015-10-16  0:26 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Jens Axboe, Linux Kernel Mailing List, Ming Lin, Kent Overstreet,
	Christoph Hellwig

On Thu, Oct 15, 2015 at 11:21 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Ming Lei <ming.lei@canonical.com> writes:
>
>> It isn't necessary to try to merge the bio which is marked
>> as NOMERGE.
>>
>> Signed-off-by: Ming Lei <ming.lei@canonical.com>
>> ---
>>  block/blk-mq.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 546b3b8..deb5f4c 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -671,6 +671,9 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
>>       struct request *rq;
>>       int checked = 8;
>>
>> +     if (!bio_mergeable(bio))
>> +             return false;
>> +
>>       list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
>>               int el_ret;
>>
>> @@ -1140,7 +1143,7 @@ static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
>>                                        struct blk_mq_ctx *ctx,
>>                                        struct request *rq, struct bio *bio)
>>  {
>> -     if (!hctx_allow_merges(hctx)) {
>> +     if (!hctx_allow_merges(hctx) || !bio_mergeable(bio)) {
>>               blk_mq_bio_to_request(rq, bio);
>>               spin_lock(&ctx->lock);
>>  insert_rq:
>
> blk_mq_attempt_merge is only called from blk_mq_merge_queue_io.  So, by
> adding the conditional in blk_mq_merge_queue_io, you don't need any
> change in blk_mq_attempt_merge.

OK.

>
> Also, why haven't you updated the non-multiqueue code paths similarly?

Will do it in v1.

>
> Cheers,
> Jeff
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2015-10-16  0:26 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-14  3:30 [PATCH 0/4] block: some misc changes Ming Lei
2015-10-14  3:30 ` [PATCH 1/4] block: setup bi_phys_segments after splitting Ming Lei
2015-10-15 15:14   ` Jeff Moyer
2015-10-14  3:30 ` [PATCH 2/4] block: avoid to merge splitted bio Ming Lei
2015-10-15 15:15   ` Jeff Moyer
2015-10-14  3:30 ` [PATCH 3/4] blk-mq: check bio_mergeable() early before merging Ming Lei
2015-10-15 15:21   ` Jeff Moyer
2015-10-16  0:26     ` Ming Lei
2015-10-14  3:30 ` [PATCH 4/4] blk-mq: mark ctx as pending at batch in flush plug path Ming Lei
2015-10-15 15:26   ` Jeff Moyer
2015-10-16  0:24     ` Ming Lei

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.