From: Daniel Wagner <dwagner@suse.de>
To: linux-kernel@vger.kernel.org
Cc: James Smart <james.smart@broadcom.com>, Keith Busch <kbusch@kernel.org>,
	Ming Lei <ming.lei@redhat.com>, Sagi Grimberg <sagi@grimberg.me>,
	Hannes Reinecke <hare@suse.de>, Wen Xiong <wenxiong@us.ibm.com>,
	Daniel Wagner <dwagner@suse.de>
Subject: [PATCH v4 3/8] nvme-rdma: Update number of hardware queues before using them
Date: Mon, 2 Aug 2021 11:14:14 +0200
Message-ID: <20210802091419.56425-4-dwagner@suse.de>
In-Reply-To: <20210802091419.56425-1-dwagner@suse.de>

When the number of hardware queues changes during a reset, update the
tagset before using it.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/nvme/host/rdma.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 69ae67652f38..de2a8950d282 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -965,6 +965,7 @@ static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl,
 static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 {
 	int ret;
+	u32 prior_q_cnt = ctrl->ctrl.queue_count;
 
 	ret = nvme_rdma_alloc_io_queues(ctrl);
 	if (ret)
@@ -982,13 +983,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 			ret = PTR_ERR(ctrl->ctrl.connect_q);
 			goto out_free_tag_set;
 		}
-	}
-
-	ret = nvme_rdma_start_io_queues(ctrl);
-	if (ret)
-		goto out_cleanup_connect_q;
-
-	if (!new) {
+	} else if (prior_q_cnt != ctrl->ctrl.queue_count) {
 		nvme_start_queues(&ctrl->ctrl);
 		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
 			/*
@@ -1004,6 +999,10 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 		nvme_unfreeze(&ctrl->ctrl);
 	}
 
+	ret = nvme_rdma_start_io_queues(ctrl);
+	if (ret)
+		goto out_cleanup_connect_q;
+
 	return 0;
 
 out_wait_freeze_timed_out:
-- 
2.29.2
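
The reordering is easier to follow as the resulting control flow than as a
diff. The sketch below condenses the patched nvme_rdma_configure_io_queues()
down to the point of the change: when a reset brings the controller back with
a different queue count, the shared tag set is resized before any I/O queue is
started, so the block layer cannot dispatch a request to a hardware context
that no longer exists. This is a simplified illustration, not the driver code:
the function name is made up, the error-path labels are collapsed into plain
returns, and the blk_mq_update_nr_hw_queues() call comes from hunk context the
diff above does not show; the nvme_* freeze helpers are the interfaces the
driver used at the time.

/*
 * Simplified sketch of the ordering enforced by this patch
 * (illustrative only; error handling collapsed).
 */
static int sketch_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
{
	u32 prior_q_cnt = ctrl->ctrl.queue_count;	/* count before re-alloc */
	int ret;

	ret = nvme_rdma_alloc_io_queues(ctrl);		/* may change queue_count */
	if (ret)
		return ret;

	if (new) {
		/* Fresh association: allocate tag set and connect_q (omitted). */
	} else if (prior_q_cnt != ctrl->ctrl.queue_count) {
		/*
		 * Reconnect with a different queue count: the queues were
		 * frozen during teardown, so wait for the freeze to settle,
		 * resize the shared tag set, then release the freeze.
		 */
		nvme_start_queues(&ctrl->ctrl);
		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT))
			return -ENODEV;
		blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
					   ctrl->ctrl.queue_count - 1);
		nvme_unfreeze(&ctrl->ctrl);
	}

	/* Only now start the (possibly fewer) I/O queues. */
	return nvme_rdma_start_io_queues(ctrl);
}

Compared with the pre-patch code, the substantive change is that
nvme_rdma_start_io_queues() moves below the resize, and the freeze-and-resize
step is skipped entirely when the queue count did not change across the reset.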
Thread overview: 10+ messages in thread

2021-08-02  9:14 [PATCH v4 0/8] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 1/8] nvme-fc: Update hardware queues before using them Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 2/8] nvme-tcp: Update number of " Daniel Wagner
2021-08-02  9:14 ` Daniel Wagner [this message]
2021-08-02  9:14 ` [PATCH v4 4/8] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 5/8] nvme-fc: avoid race between time out and tear down Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 6/8] nvme-fc: fix controller reset hang during traffic Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 7/8] nvme-tcp: Unfreeze queues on reconnect Daniel Wagner
2021-08-02  9:14 ` [PATCH v4 8/8] nvme-rdma: " Daniel Wagner
2021-08-02 11:26 [PATCH RESEND v4 0/8] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-08-02 11:26 ` [PATCH v4 3/8] nvme-rdma: Update number of hardware queues before using them Daniel Wagner