From: Ming Lei <ming.lei@redhat.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	"Martin K . Petersen" <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, Keith Busch <kbusch@kernel.org>,
	Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 3/5] blk-mq: add helper of blk_mq_shared_quiesce_wait()
Date: Tue, 30 Nov 2021 15:37:50 +0800	[thread overview]
Message-ID: <20211130073752.3005936-4-ming.lei@redhat.com> (raw)
In-Reply-To: <20211130073752.3005936-1-ming.lei@redhat.com>

Add the helper blk_mq_shared_quiesce_wait() to support quiescing queues
in parallel: when the shared (host-wide) quiesce wait is allowed, a
single wait is enough to cover all such queues.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/blk-mq.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)
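
For context, here is a minimal sketch (not taken from this series) of how
a driver could combine this helper with the existing
blk_mq_quiesce_queue_nowait() and blk_mq_wait_quiesce_done() APIs to
quiesce many queues in parallel and wait only once. The struct demo_ns
wrapper, the function name and the qlist parameter are made up for
illustration:

#include <linux/blk-mq.h>
#include <linux/list.h>
#include <linux/rcupdate.h>

/* Hypothetical per-namespace wrapper, for illustration only. */
struct demo_ns {
	struct request_queue *queue;
	struct list_head list;
};

/*
 * Quiesce every queue on @qlist in parallel.  Assumes all queues of one
 * host share the same blocking property, so either one synchronize_rcu()
 * covers them all or every queue needs its own SRCU wait.
 */
static void demo_quiesce_queues(struct list_head *qlist)
{
	struct demo_ns *ns;

	/* Phase 1: start quiescing each queue without waiting. */
	list_for_each_entry(ns, qlist, list)
		blk_mq_quiesce_queue_nowait(ns->queue);

	/* Phase 2: wait once if possible, otherwise per queue. */
	list_for_each_entry(ns, qlist, list) {
		if (blk_mq_shared_quiesce_wait(ns->queue)) {
			/* non-SRCU queues share one RCU grace period */
			synchronize_rcu();
			break;
		}
		/* SRCU-backed queue: wait on its own srcu_struct */
		blk_mq_wait_quiesce_done(ns->queue);
	}
}

Without the helper, the second loop would have to wait for a grace
period per queue, which is what makes quiescing many namespaces slow
today.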

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 42fe97adb807..6f3ccd604d72 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -788,6 +788,19 @@ static inline bool blk_mq_add_to_batch(struct request *req,
 	return true;
 }
 
+/*
+ * If the queue uses SRCU for quiescing, the quiesce wait must be done via
+ * synchronize_srcu(q->srcu); otherwise the wait can be shared with other
+ * request queues on the same host via one plain synchronize_rcu().
+ *
+ * This helper supports quiescing queues in parallel: a single quiesce
+ * wait is enough when the shared quiesce wait is allowed.
+ */
+static inline bool blk_mq_shared_quiesce_wait(struct request_queue *q)
+{
+	return !blk_queue_has_srcu(q);
+}
+
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
 void blk_mq_kick_requeue_list(struct request_queue *q);
 void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
-- 
2.31.1


Thread overview: 11+ messages
2021-11-30  7:37 [PATCH V2 0/5] blk-mq: quiesce improvement Ming Lei
2021-11-30  7:37 ` [PATCH V2 1/5] blk-mq: remove hctx_lock and hctx_unlock Ming Lei
2021-11-30 14:42   ` kernel test robot
2021-12-05 13:16   ` [blk] 4575a8b36e: WARNING:at_kernel/sched/core.c:#__might_sleep kernel test robot
2021-11-30  7:37 ` [PATCH V2 2/5] blk-mq: move srcu from blk_mq_hw_ctx to request_queue Ming Lei
2021-11-30  7:37 ` Ming Lei [this message]
2021-11-30  7:37 ` [PATCH V2 4/5] nvme: quiesce namespace queue in parallel Ming Lei
2021-11-30  7:37 ` [PATCH V2 5/5] scsi: use blk-mq quiesce APIs to implement scsi_host_block Ming Lei
2022-06-07 11:21 ` [PATCH V2 4/5] nvme: quiesce namespace queue in parallel Ismael Luceno
2022-06-07 14:03   ` Ming Lei
2022-07-06 15:37     ` Ismael Luceno
