* [PATCH] blk-mq-sched: fix performance regression of mq-deadline
@ 2017-07-03 12:37 Ming Lei
  2017-07-03 22:53 ` Jens Axboe
From: Ming Lei @ 2017-07-03 12:37 UTC
  To: Jens Axboe, linux-block, Christoph Hellwig; +Cc: Ming Lei

When mq-deadline is used, IOPS of sequential read and
sequential write drop by more than 20% on SATA (scsi-mq)
devices, compared with the 'none' scheduler.

The reason is that the default nr_requests for the scheduler
is too big for devices with a small queue depth, which
increases latency significantly.

Since the rationale for the 256-request scheduler default
assumes a hardware queue depth of 128, this patch changes the
default to twice min(hw queue_depth, 128).

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-sched.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 7f0dc48ffb40..4ab69435708c 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -515,10 +515,12 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	}
 
 	/*
-	 * Default to 256, since we don't split into sync/async like the
-	 * old code did. Additionally, this is a per-hw queue depth.
+	 * Default to twice the smaller of the hw queue depth and 128,
+	 * since we don't split into sync/async like the old code did.
+	 * Additionally, this is a per-hw queue depth.
 	 */
-	q->nr_requests = 2 * BLKDEV_MAX_RQ;
+	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
+				   BLKDEV_MAX_RQ);
 
 	queue_for_each_hw_ctx(q, hctx, i) {
 		ret = blk_mq_sched_alloc_tags(q, hctx, i);
-- 
2.9.4

* Re: [PATCH] blk-mq-sched: fix performance regression of mq-deadline
  2017-07-03 12:37 [PATCH] blk-mq-sched: fix performance regression of mq-deadline Ming Lei
@ 2017-07-03 22:53 ` Jens Axboe
From: Jens Axboe @ 2017-07-03 22:53 UTC
  To: Ming Lei, linux-block, Christoph Hellwig

On 07/03/2017 06:37 AM, Ming Lei wrote:
> When mq-deadline is used, IOPS of sequential read and
> sequential write drop by more than 20% on SATA (scsi-mq)
> devices, compared with the 'none' scheduler.
> 
> The reason is that the default nr_requests for the scheduler
> is too big for devices with a small queue depth, which
> increases latency significantly.
> 
> Since the rationale for the 256-request scheduler default
> assumes a hardware queue depth of 128, this patch changes the
> default to twice min(hw queue_depth, 128).

That's somewhat odd, but it's a better default than some random
value. I'll apply it, thanks Ming.

-- 
Jens Axboe
