From: Jens Axboe <axboe@kernel.dk>
To: Keith Busch <kbusch@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, Ming Lei <ming.lei@redhat.com>,
linux-block@vger.kernel.org, Chao Leng <lengchao@huawei.com>,
Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v3 1/2] blk-mq: add async quiesce interface
Date: Mon, 27 Jul 2020 15:30:31 -0600 [thread overview]
Message-ID: <b8e79e31-05a8-2238-8aca-d4140d3d4412@kernel.dk> (raw)
In-Reply-To: <20200727212137.GA797661@dhcp-10-100-145-180.wdl.wdc.com>
On 7/27/20 3:21 PM, Keith Busch wrote:
> On Mon, Jul 27, 2020 at 03:05:40PM -0600, Jens Axboe wrote:
>> +void blk_mq_quiesce_queue_wait(struct request_queue *q)
>>  {
>>  	struct blk_mq_hw_ctx *hctx;
>>  	unsigned int i;
>>  	bool rcu = false;
>>
>> -	blk_mq_quiesce_queue_nowait(q);
>> -
>>  	queue_for_each_hw_ctx(q, hctx, i) {
>>  		if (hctx->flags & BLK_MQ_F_BLOCKING)
>>  			synchronize_srcu(hctx->srcu);
>>  		else
>>  			rcu = true;
>>  	}
>> +
>>  	if (rcu)
>>  		synchronize_rcu();
>>  }
>
> Either all the hctx's are blocking or none of them are: we don't need to
> iterate the hctx's to see which sync method to use. We can add at the
> very beginning (and get rid of 'bool rcu'):
>
> 	if (!(q->tag_set->flags & BLK_MQ_F_BLOCKING)) {
> 		synchronize_rcu();
> 		return;
> 	}
Agree, was just copy/pasting the existing code.
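For reference, the function with Keith's simplification folded in would look roughly like the sketch below (kernel-internal APIs, shown for illustration only; since every hctx inherits BLK_MQ_F_BLOCKING from the tag set, the flag is tested once and the per-hctx loop is only needed for the SRCU case):

```c
void blk_mq_quiesce_queue_wait(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;

	/* Non-blocking hctxs all use plain RCU: one grace period suffices. */
	if (!(q->tag_set->flags & BLK_MQ_F_BLOCKING)) {
		synchronize_rcu();
		return;
	}

	/* Blocking hctxs each carry their own SRCU domain. */
	queue_for_each_hw_ctx(q, hctx, i)
		synchronize_srcu(hctx->srcu);
}
```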
> But the issue Sagi is trying to address is quiescing a lot of
> request queues sharing a tagset where synchronize_rcu() is too time
> consuming to do repeatedly. He wants to synchronize once for the entire
> tagset rather than per-request_queue, so I think he needs an API taking
> a 'struct blk_mq_tag_set' instead of a 'struct request_queue'.
Gotcha, yeah that won't work for multiple queues obviously.
Are all these queues sharing a tag set? If so, yes that seems like the
right abstraction. And the pointer addition is a much better idea than
including a full srcu/rcu struct.
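A tagset-level helper along the lines Keith describes might look like the following sketch (hypothetical; it walks the tag set's queue list, marks each queue quiesced without waiting, then pays the grace-period cost once for the whole set — an interface of this shape later landed upstream as blk_mq_quiesce_tagset()):

```c
void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	/* Flag every queue sharing this tag set as quiesced. */
	mutex_lock(&set->tag_list_lock);
	list_for_each_entry(q, &set->tag_list, tag_set_list)
		blk_mq_quiesce_queue_nowait(q);
	mutex_unlock(&set->tag_list_lock);

	/* One grace period covers all queues (assumes !BLK_MQ_F_BLOCKING). */
	synchronize_rcu();
}
```

The key win is that the synchronize_rcu() cost is amortized across every namespace/queue in the set instead of being paid per request_queue.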
--
Jens Axboe
Thread overview: 19+ messages
2020-07-26 0:22 [PATCH v3 0/2] improve quiesce time for large amount of namespaces Sagi Grimberg
2020-07-26 0:23 ` [PATCH v3 1/2] blk-mq: add async quiesce interface Sagi Grimberg
2020-07-26 9:31 ` Ming Lei
2020-07-26 16:27 ` Sagi Grimberg
2020-07-27 2:08 ` Ming Lei
2020-07-27 3:33 ` Chao Leng
2020-07-27 3:50 ` Ming Lei
2020-07-27 5:55 ` Chao Leng
2020-07-27 6:32 ` Ming Lei
2020-07-27 18:40 ` Sagi Grimberg
2020-07-27 18:38 ` Sagi Grimberg
2020-07-27 18:36 ` Sagi Grimberg
2020-07-27 20:37 ` Jens Axboe
2020-07-27 21:00 ` Sagi Grimberg
2020-07-27 21:05 ` Jens Axboe
2020-07-27 21:21 ` Keith Busch
2020-07-27 21:30 ` Jens Axboe [this message]
2020-07-28 1:09 ` Ming Lei
2020-07-26 0:23 ` [PATCH v3 2/2] nvme: improve quiesce time for large amount of namespaces Sagi Grimberg