linux-nvme.lists.infradead.org archive mirror
From: Chao Leng <lengchao@huawei.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	<linux-nvme@lists.infradead.org>, Christoph Hellwig <hch@lst.de>,
	Keith Busch <kbusch@kernel.org>,
	James Smart <james.smart@broadcom.com>
Subject: Re: [PATCH 5/6] nvme-rdma: fix timeout handler
Date: Wed, 5 Aug 2020 15:35:28 +0800	[thread overview]
Message-ID: <60ced5bb-3169-d9fc-4505-6032107d45a3@huawei.com> (raw)
In-Reply-To: <77794f62-2d4a-d2c9-f474-4ddbb361a308@grimberg.me>



On 2020/8/5 15:19, Sagi Grimberg wrote:
> 
>>>>> The request being timed out cannot be completed after the queue is
>>>>> stopped; that is the point of nvme_rdma_stop_queue. If it is only
>>>>> ALLOCATED, we did not yet connect, hence there is zero chance for
>>>>> any command to complete.
>>>> The request may already have completed before the queue is stopped:
>>>> the completion is in the CQ, but has not yet been processed by software.
>>>
>>> Not possible, ib_drain_cq completion guarantees that all cqes were
>>> reaped and handled by SW.
>>>
>>>> If nvme_rdma_stop_queue runs concurrently
>>>
>>> Before we complete we make sure the queue is stopped (and drained and
>>> reaped).
>>>
>>>> , for example:
>>>> Error recovery runs first: it clears the NVME_RDMA_Q_LIVE flag and
>>>> then waits to drain the CQ. At the same time, nvme_rdma_timeout
>>>> calls nvme_rdma_stop_queue, which returns immediately (the flag has
>>>> already been cleared), and may then call blk_mq_complete_request.
>>>> But error recovery may be draining the CQ at the same time, and may
>>>> process the same request.
>>>
>>> We flush the err_work before running nvme_rdma_stop_queue exactly
>>> because of that. Your example cannot happen.
>> Flushing the work is not safe. See my previous email.
> 
> How is it not safe? When flush_work returns, the work is guaranteed
> to have finished execution, and we only do that for the
> RESETTING/CONNECTING states, which means it has either already started
> or already finished.

Even though the state is NVME_CTRL_RESETTING, that does not mean the work
has already been queued (started) or finished. There is a window between
changing the state and queuing the work.

Like this:
static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
{
     if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
         return;
--------------------------------
A hard interrupt may fire here, and the timeout handler may flush the
work at this point, before it has been queued, so flush_work returns
immediately. Error recovery and nvme_rdma_complete_timed_out may then
stop the queue concurrently. As a result, error recovery may cancel the
request, or nvme_rdma_complete_timed_out may complete it, while the
queue has not actually been stopped yet, which leads to abnormal
behavior (see the sketch of the timeout side below).
--------------------------------
     queue_work(nvme_reset_wq, &ctrl->err_work);
}
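
To make the window concrete, the ordering on the timeout side that I am
concerned about is roughly the following. This is a simplified sketch of
how I read the patch, not the exact patch body; the helper and field
names are taken from the existing driver:

static void nvme_rdma_complete_timed_out(struct request *rq)
{
     struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
     struct nvme_rdma_queue *queue = req->queue;

     /* returns immediately if err_work has not been queued yet */
     flush_work(&queue->ctrl->err_work);
     nvme_rdma_stop_queue(queue);
     /* ... */
     blk_mq_complete_request(rq);
}

If the timeout handler hits the window above, flush_work finds nothing to
flush, and the rest of the function races with the error recovery work
that is queued right afterwards.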

Another point: although the probability is very low, reset work and
nvme_rdma_complete_timed_out may also stop the queue concurrently, which
can likewise cause abnormal behavior.
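
The interleaving I have in mind, assuming the reset path still reaches
nvme_rdma_stop_queue through nvme_rdma_teardown_io_queues (the exact
call chain is from memory), is roughly:

     reset work                         timeout handler
     ----------                         ---------------
                                        nvme_rdma_complete_timed_out()
                                          flush_work(&ctrl->err_work)
                                          /* no-op: err_work not queued */
     nvme_rdma_teardown_io_queues()
       nvme_rdma_stop_queue()             nvme_rdma_stop_queue()
       /* cancels/requeues requests */    blk_mq_complete_request(rq)
                                          /* the same request may be
                                             handled on both sides */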


Thread overview: 21+ messages
2020-08-03  6:58 [PATCH 0/6] fix possible controller reset hangs in nvme-tcp/nvme-rdma Sagi Grimberg
2020-08-03  6:58 ` [PATCH 1/6] nvme-fabrics: allow to queue requests for live queues Sagi Grimberg
2020-08-03  6:58 ` [PATCH 2/6] nvme: have nvme_wait_freeze_timeout return if it timed out Sagi Grimberg
2020-08-03  6:58 ` [PATCH 3/6] nvme-tcp: fix timeout handler Sagi Grimberg
2020-08-03  6:58 ` [PATCH 4/6] nvme-tcp: fix reset hang if controller died in the middle of a reset Sagi Grimberg
2020-08-03  6:58 ` [PATCH 5/6] nvme-rdma: fix timeout handler Sagi Grimberg
2020-08-03 10:25   ` Chao Leng
2020-08-03 15:03     ` Sagi Grimberg
2020-08-04  1:49       ` Chao Leng
2020-08-04 15:36         ` Sagi Grimberg
2020-08-05  1:07           ` Chao Leng
2020-08-05  1:12             ` Sagi Grimberg
2020-08-05  6:27               ` Chao Leng
2020-08-05  7:00                 ` Sagi Grimberg
2020-08-05  7:14                   ` Chao Leng
2020-08-05  7:19                     ` Sagi Grimberg
2020-08-05  7:35                       ` Chao Leng [this message]
2020-08-05  8:17                         ` Sagi Grimberg
2020-08-06 19:52   ` David Milburn
2020-08-06 20:11     ` Sagi Grimberg
2020-08-03  6:58 ` [PATCH 6/6] nvme-rdma: fix reset hang if controller died in the middle of a reset Sagi Grimberg
