linux-block.vger.kernel.org archive mirror
* [PATCH 1/2] blk-mq: bypass IO scheduler's limit_depth for passthrough request
@ 2021-04-15  3:39 Lin Feng
  2021-04-16  2:10 ` Ming Lei
  2021-04-16 12:09 ` Jens Axboe
  0 siblings, 2 replies; 3+ messages in thread
From: Lin Feng @ 2021-04-15  3:39 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, ming.lei, linf, linux-kernel

Commit 01e99aeca39796003 ("blk-mq: insert passthrough request into
hctx->dispatch directly") gives high priority to passthrough requests and
bypasses the underlying IO scheduler. But when we allocate a tag for such
a request we still run the IO scheduler's limit_depth callback, while what
we really want is to give the full sbitmap depth to such requests when
acquiring an available tag.
blktrace shows PC requests (dmraid -s -c -i) hitting bfq's limit_depth:
  8,0    2        0     0.000000000 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
  8,0    2        1     0.000008134 39952  D   R 4 [dmraid]
  8,0    2        2     0.000021538    24  C   R [0]
  8,0    2        0     0.000035442 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
  8,0    2        3     0.000038813 39952  D   R 24 [dmraid]
  8,0    2        4     0.000044356    24  C   R [0]

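For context, a minimal sketch (illustrative only, not bfq's actual code) of
what an elevator's ->limit_depth() hook does: it runs at tag allocation time
and caps the usable sbitmap depth by setting data->shallow_depth, which is
where the "depth 8" above comes from.

/*
 * Schematic ->limit_depth() hook.  The condition and the cap value are made
 * up for illustration; the callback signature and the shallow_depth field
 * match blk-mq's internal tag-allocation context (struct blk_mq_alloc_data).
 */
static void example_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
{
	/* e.g. confine async requests to a small slice of the tag space */
	if (!op_is_sync(op))
		data->shallow_depth = 8;
}
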
This patch introduces a new wrapper to keep the code from getting ugly.

Signed-off-by: Lin Feng <linf@wangsu.com>
---
 block/blk-mq.c         | 3 ++-
 include/linux/blkdev.h | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d4d7c1caa439..927189a55575 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -361,11 +361,12 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
 
 	if (e) {
 		/*
-		 * Flush requests are special and go directly to the
+		 * Flush/passthrough requests are special and go directly to the
 		 * dispatch list. Don't include reserved tags in the
 		 * limiting, as it isn't useful.
 		 */
 		if (!op_is_flush(data->cmd_flags) &&
+		    !blk_op_is_passthrough(data->cmd_flags) &&
 		    e->type->ops.limit_depth &&
 		    !(data->flags & BLK_MQ_REQ_RESERVED))
 			e->type->ops.limit_depth(data->cmd_flags, data);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 158aefae1030..0d81eed39833 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -272,6 +272,12 @@ static inline bool bio_is_passthrough(struct bio *bio)
 	return blk_op_is_scsi(op) || blk_op_is_private(op);
 }
 
+static inline bool blk_op_is_passthrough(unsigned int op)
+{
+	return (blk_op_is_scsi(op & REQ_OP_MASK) ||
+			blk_op_is_private(op & REQ_OP_MASK));
+}
+
 static inline unsigned short req_get_ioprio(struct request *req)
 {
 	return req->ioprio;
-- 
2.30.2
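
The "PC" requests above are SCSI packet-command (i.e. passthrough) requests:
they carry a raw SCSI CDB submitted directly to the device, e.g. via the
SG_IO ioctl, instead of going through the normal read/write path. A minimal
userspace sketch of issuing one (a standard INQUIRY; the device node and
buffer sizes are just an example):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(void)
{
	unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 };  /* INQUIRY, 96 bytes */
	unsigned char buf[96], sense[32];
	struct sg_io_hdr hdr;
	int fd = open("/dev/sda", O_RDONLY);              /* device node assumed */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';
	hdr.dxfer_direction = SG_DXFER_FROM_DEV;
	hdr.cmdp = cdb;
	hdr.cmd_len = sizeof(cdb);
	hdr.dxferp = buf;
	hdr.dxfer_len = sizeof(buf);
	hdr.sbp = sense;
	hdr.mx_sb_len = sizeof(sense);
	hdr.timeout = 5000;                               /* milliseconds */

	if (ioctl(fd, SG_IO, &hdr) < 0)
		perror("SG_IO");
	else
		printf("vendor: %.8s model: %.16s\n",
		       (char *)buf + 8, (char *)buf + 16);

	close(fd);
	return 0;
}

The kernel submits each such ioctl as a passthrough request, i.e. exactly the
kind of request the patch above exempts from ->limit_depth().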



* Re: [PATCH 1/2] blk-mq: bypass IO scheduler's limit_depth for passthrough request
  2021-04-15  3:39 [PATCH 1/2] blk-mq: bypass IO scheduler's limit_depth for passthrough request Lin Feng
@ 2021-04-16  2:10 ` Ming Lei
  2021-04-16 12:09 ` Jens Axboe
  1 sibling, 0 replies; 3+ messages in thread
From: Ming Lei @ 2021-04-16  2:10 UTC (permalink / raw)
  To: Lin Feng; +Cc: axboe, linux-block, linux-kernel

On Thu, Apr 15, 2021 at 11:39:20AM +0800, Lin Feng wrote:
> Commit 01e99aeca39796003 ("blk-mq: insert passthrough request into
> hctx->dispatch directly") gives high priority to passthrough requests and
> bypasses the underlying IO scheduler. But when we allocate a tag for such
> a request we still run the IO scheduler's limit_depth callback, while what
> we really want is to give the full sbitmap depth to such requests when
> acquiring an available tag.
> blktrace shows PC requests (dmraid -s -c -i) hitting bfq's limit_depth:
>   8,0    2        0     0.000000000 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
>   8,0    2        1     0.000008134 39952  D   R 4 [dmraid]
>   8,0    2        2     0.000021538    24  C   R [0]
>   8,0    2        0     0.000035442 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
>   8,0    2        3     0.000038813 39952  D   R 24 [dmraid]
>   8,0    2        4     0.000044356    24  C   R [0]
> 
> This patch introduces a new wrapper to keep the code from getting ugly.
> 
> Signed-off-by: Lin Feng <linf@wangsu.com>
> ---
>  block/blk-mq.c         | 3 ++-
>  include/linux/blkdev.h | 6 ++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index d4d7c1caa439..927189a55575 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -361,11 +361,12 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
>  
>  	if (e) {
>  		/*
> -		 * Flush requests are special and go directly to the
> +		 * Flush/passthrough requests are special and go directly to the
>  		 * dispatch list. Don't include reserved tags in the
>  		 * limiting, as it isn't useful.
>  		 */
>  		if (!op_is_flush(data->cmd_flags) &&
> +		    !blk_op_is_passthrough(data->cmd_flags) &&
>  		    e->type->ops.limit_depth &&
>  		    !(data->flags & BLK_MQ_REQ_RESERVED))
>  			e->type->ops.limit_depth(data->cmd_flags, data);
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 158aefae1030..0d81eed39833 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -272,6 +272,12 @@ static inline bool bio_is_passthrough(struct bio *bio)
>  	return blk_op_is_scsi(op) || blk_op_is_private(op);
>  }
>  
> +static inline bool blk_op_is_passthrough(unsigned int op)
> +{
> +	return (blk_op_is_scsi(op & REQ_OP_MASK) ||
> +			blk_op_is_private(op & REQ_OP_MASK));
> +}
> +
>  static inline unsigned short req_get_ioprio(struct request *req)
>  {
>  	return req->ioprio;
> -- 
> 2.30.2
> 

Looks fine:

Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks, 
Ming



* Re: [PATCH 1/2] blk-mq: bypass IO scheduler's limit_depth for passthrough request
  2021-04-15  3:39 [PATCH 1/2] blk-mq: bypass IO scheduler's limit_depth for passthrough request Lin Feng
  2021-04-16  2:10 ` Ming Lei
@ 2021-04-16 12:09 ` Jens Axboe
  1 sibling, 0 replies; 3+ messages in thread
From: Jens Axboe @ 2021-04-16 12:09 UTC (permalink / raw)
  To: Lin Feng; +Cc: linux-block, ming.lei, linux-kernel

On 4/14/21 9:39 PM, Lin Feng wrote:
> Commit 01e99aeca39796003 ("blk-mq: insert passthrough request into
> hctx->dispatch directly") gives high priority to passthrough requests and
> bypasses the underlying IO scheduler. But when we allocate a tag for such
> a request we still run the IO scheduler's limit_depth callback, while what
> we really want is to give the full sbitmap depth to such requests when
> acquiring an available tag.
> blktrace shows PC requests (dmraid -s -c -i) hitting bfq's limit_depth:
>   8,0    2        0     0.000000000 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
>   8,0    2        1     0.000008134 39952  D   R 4 [dmraid]
>   8,0    2        2     0.000021538    24  C   R [0]
>   8,0    2        0     0.000035442 39952 1,0  m   N bfq [bfq_limit_depth] wr_busy 0 sync 0 depth 8
>   8,0    2        3     0.000038813 39952  D   R 24 [dmraid]
>   8,0    2        4     0.000044356    24  C   R [0]
> 
> This patch introduces a new wrapper to keep the code from getting ugly.

Applied, thanks.

-- 
Jens Axboe



