From: Sagi Grimberg <sagi@grimberg.me>
To: Chao Leng <lengchao@huawei.com>,
linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Keith Busch <kbusch@kernel.org>,
James Smart <james.smart@broadcom.com>
Subject: Re: [PATCH 5/6] nvme-rdma: fix timeout handler
Date: Wed, 5 Aug 2020 01:17:00 -0700
Message-ID: <ccc941a4-171f-b01b-851a-35079a483255@grimberg.me>
In-Reply-To: <60ced5bb-3169-d9fc-4505-6032107d45a3@huawei.com>
>> How is it not safe? when flush_work returns, the work is guaranteed
>> to have finished execution, and we only do that for states
>> RESETTING/CONNECTING which means that it either has already started
>> or already finished.
>
> Even though the state is NVME_CTRL_RESETTING, that does not mean the
> work has already been queued (started) or finished. There is a window
> between the state change and queuing the work.
>
> Like this:
> static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
> {
> 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
> 		return;
> --------------------------------
> A hard interrupt may fire here, and the timeout path may then flush
> the work. Error recovery and nvme_rdma_complete_timed_out can then
> race to stop the queue: error recovery may cancel the request, or
> nvme_rdma_complete_timed_out may complete it, while the queue is not
> actually stopped. That leads to abnormal behavior.
> --------------------------------
> 	queue_work(nvme_reset_wq, &ctrl->err_work);
> }
>
> Also, although the probability of occurrence is very low, reset work
> and nvme_rdma_complete_timed_out may likewise race to stop the queue,
> which can cause the same problem.
I see your point.
We can serialize ctrl teardown with a lock (similar to
dev->shutdown_lock that we have in pci).
Something like:
--
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 96fa3185d123..8c8f7492cab4 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1168,11 +1168,13 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	struct nvme_rdma_ctrl *ctrl = container_of(work,
 			struct nvme_rdma_ctrl, err_work);
 
+	mutex_lock(&ctrl->shutdown_lock);
 	nvme_stop_keep_alive(&ctrl->ctrl);
 	nvme_rdma_teardown_io_queues(ctrl, false);
 	nvme_start_queues(&ctrl->ctrl);
 	nvme_rdma_teardown_admin_queue(ctrl, false);
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+	mutex_unlock(&ctrl->shutdown_lock);
 
 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
 		/* state change failure is ok if we started ctrl delete */
@@ -1964,7 +1966,9 @@ static void nvme_rdma_complete_timed_out(struct request *rq)
 
 	/* fence other contexts that may complete the command */
 	flush_work(&ctrl->err_work);
+	mutex_lock(&ctrl->shutdown_lock);
 	nvme_rdma_stop_queue(queue);
+	mutex_unlock(&ctrl->shutdown_lock);
 	if (blk_mq_request_completed(rq))
 		return;
 	nvme_req(rq)->flags |= NVME_REQ_CANCELLED;
@@ -2226,6 +2230,7 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 	cancel_work_sync(&ctrl->err_work);
 	cancel_delayed_work_sync(&ctrl->reconnect_work);
 
+	mutex_lock(&ctrl->shutdown_lock);
 	nvme_rdma_teardown_io_queues(ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	if (shutdown)
@@ -2233,6 +2238,7 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 	else
 		nvme_disable_ctrl(&ctrl->ctrl);
 	nvme_rdma_teardown_admin_queue(ctrl, shutdown);
+	mutex_unlock(&ctrl->shutdown_lock);
 }
 
 static void nvme_rdma_delete_ctrl(struct nvme_ctrl *ctrl)
--