From: Jens Axboe <axboe@kernel.dk>
To: linux-block@vger.kernel.org
Cc: bvanassche@acm.org
Subject: [PATCHSET RFC 0/2] mq-deadline scalability improvements
Date: Thu, 18 Jan 2024 11:04:55 -0700 [thread overview]
Message-ID: <20240118180541.930783-1-axboe@kernel.dk> (raw)
Hi,
It's no secret that mq-deadline doesn't scale very well - it was
originally done as a proof-of-concept conversion from deadline, when the
blk-mq multiqueue layer was written. In the single queue world, the
queue lock protected the IO scheduler as well, and mq-deadline simply
adopted an internal dd->lock to fill the place of that.
While mq-deadline works under blk-mq and doesn't suffer any scaling
issues on that side, as soon as request insertion or dispatch is done,
we're hitting the per-queue dd->lock quite intensely. On a basic test box
with 16 cores / 32 threads, running a number of IO intensive threads
on either null_blk (single hw queue) or nvme0n1 (many hw queues) shows
this quite easily:
Device      QD    Jobs    IOPS     Lock contention
=======================================================
null_blk     4      32    1090K    92%
nvme0n1      4      32    1070K    94%
which looks pretty miserable: most of the time is spent contending on
the queue lock.
This RFC patchset attempts to address that by:
1) Serialize dispatch of requests. If we fail dispatching, rely on
   the next completion to dispatch the next one. This could potentially
   reduce the overall depth achieved on the device side, however even
   for the heavily contended test I'm running here, no observable
   change is seen. This is patch 1.
2) Serialize request insertion, using internal per-cpu lists to
   temporarily store requests until insertion can proceed. This is
   patch 2.
With that in place, the same test case now does:
Device      QD    Jobs    IOPS     Contention   Diff
=============================================================
null_blk     4      32    2250K    28%          +106%
nvme0n1      4      32    2560K    23%          +112%
and while that doesn't completely eliminate the lock contention, it's
oodles better than what it was before. The throughput increase shows
that nicely, with more than 100% improvement for both cases.
block/mq-deadline.c | 146 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 133 insertions(+), 13 deletions(-)
--
Jens Axboe
Thread overview: 20+ messages
2024-01-18 18:04 Jens Axboe [this message]
2024-01-18 18:04 ` [PATCH 1/2] block/mq-deadline: serialize request dispatching Jens Axboe
2024-01-18 18:24 ` Bart Van Assche
2024-01-18 18:45 ` Jens Axboe
2024-01-18 18:51 ` Bart Van Assche
2024-01-18 18:55 ` Jens Axboe
2024-01-19 2:40 ` Ming Lei
2024-01-19 15:49 ` Jens Axboe
2024-01-18 18:04 ` [PATCH 2/2] block/mq-deadline: fallback to per-cpu insertion buckets under contention Jens Axboe
2024-01-18 18:25 ` Keith Busch
2024-01-18 18:28 ` Jens Axboe
2024-01-18 18:31 ` Bart Van Assche
2024-01-18 18:33 ` Jens Axboe
2024-01-18 18:53 ` Bart Van Assche
2024-01-18 18:56 ` Jens Axboe
2024-01-18 20:46 ` Bart Van Assche
2024-01-18 20:52 ` Jens Axboe
2024-01-19 23:11 ` Bart Van Assche
2024-01-18 19:29 ` [PATCHSET RFC 0/2] mq-deadline scalability improvements Jens Axboe
2024-01-18 20:22 ` Jens Axboe