From: Chao Leng <lengchao@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, Yi Zhang <yi.zhang@redhat.com>,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH V2 3/4] nvme: tcp: complete non-IO requests atomically
Date: Wed, 21 Oct 2020 10:20:11 +0800	[thread overview]
Message-ID: <7a3cdbdb-8e57-484c-fcb3-e8e72dfe8d13@huawei.com> (raw)
In-Reply-To: <20201021012241.GC1571548@T590>



On 2020/10/21 9:22, Ming Lei wrote:
> On Tue, Oct 20, 2020 at 05:04:29PM +0800, Chao Leng wrote:
>>
>>
>> On 2020/10/20 16:53, Ming Lei wrote:
>>> During the controller's CONNECTING state, admin/fabric/connect requests
>>> are submitted to recover the controller, and we allow these requests to
>>> be aborted directly in the timeout handler so that the setup procedure
>>> is not blocked.
>>>
>>> So a timeout vs. normal completion race exists on these requests, since
>>> the admin/fabric/connect queues won't be shut down before a timeout is
>>> handled during the CONNECTING state.
>>>
>>> Add atomic completion for requests from the connect/fabric/admin queues
>>> to avoid the race.
>>>
>>> CC: Chao Leng <lengchao@huawei.com>
>>> Cc: Sagi Grimberg <sagi@grimberg.me>
>>> Reported-by: Yi Zhang <yi.zhang@redhat.com>
>>> Tested-by: Yi Zhang <yi.zhang@redhat.com>
>>> Signed-off-by: Ming Lei <ming.lei@redhat.com>
>>> ---
>>>    drivers/nvme/host/tcp.c | 40 +++++++++++++++++++++++++++++++++++++---
>>>    1 file changed, 37 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index d6a3e1487354..7e85bd4a8d1b 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -30,6 +30,8 @@ static int so_priority;
>>>    module_param(so_priority, int, 0644);
>>>    MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
>>> +#define REQ_STATE_COMPLETE     0
>>> +
>>>    enum nvme_tcp_send_state {
>>>    	NVME_TCP_SEND_CMD_PDU = 0,
>>>    	NVME_TCP_SEND_H2C_PDU,
>>> @@ -56,6 +58,8 @@ struct nvme_tcp_request {
>>>    	size_t			offset;
>>>    	size_t			data_sent;
>>>    	enum nvme_tcp_send_state state;
>>> +
>>> +	unsigned long		comp_state;
>> I do not think adding another state is a good idea; it duplicates
>> rq->state.
>> In the teardown process, deleting the timer and cancelling the timeout
>> work after the queues are quiesced may be a better option.
>> I will send the patch later.
>> The patch has already been tested with RoCE for more than one week.
> 
> Actually there isn't a race between timeout and teardown, and patch 1 and
> patch 2 are enough to fix the issue reported by Yi.
> 
> It is just that rq->state is updated to IDLE in its ->complete(), so
> either one of the code paths may think that this rq isn't completed, and
> patch 2 has addressed this issue.
> 
> In short, the teardown lock is enough to cover the race.
The race may cause several kinds of abnormal behavior:
1. The issue reported by Yi Zhang <yi.zhang@redhat.com>
detail: https://lore.kernel.org/linux-nvme/1934331639.3314730.1602152202454.JavaMail.zimbra@redhat.com/
2. BUG_ON in blk_mq_requeue_request
Error recovery and timeout handling may complete the same request twice.
First, error recovery cancels the request during the teardown process; the
request is retried on completion and rq->state changes to IDLE.
Then the timeout handler completes the request again and likewise retries
it, so the BUG_ON in blk_mq_requeue_request triggers.
3. Abnormal link disconnection
First, error recovery cancels all requests, the reconnect succeeds, and the
requests are restarted. Then the timeout handler completes a request again,
and the queue is stopped in nvme_rdma(tcp)_complete_timed_out, so an
abnormal link disconnection happens. This requires the timeout handling to
be delayed for a long time for some reason (such as a hardware interrupt),
so the probability is low.

The teardown_lock only serializes the race, and checking rq->state can avoid
scenarios 1 and 2, but scenario 3 cannot be fixed that way.
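
Roughly, the teardown-side ordering described above would look something
like the sketch below. This is only an illustration, not the patch I will
send; the helper name nvme_tcp_error_recovery_sync() and the exact call
sites are made up.

static void nvme_tcp_error_recovery_sync(struct nvme_ctrl *ctrl)
{
	/* quiesce first so no new requests can be dispatched */
	blk_mq_quiesce_queue(ctrl->admin_q);
	nvme_stop_queues(ctrl);

	/*
	 * nvme_sync_queues() calls blk_sync_queue() on the admin and
	 * namespace request queues: the per-queue timeout timer is
	 * deleted and q->timeout_work is cancelled, so no timeout
	 * handler can run concurrently with the cancellation below.
	 */
	nvme_sync_queues(ctrl);

	/* every outstanding request is now completed exactly once */
	blk_mq_tagset_busy_iter(ctrl->admin_tagset, nvme_cancel_request, ctrl);
	if (ctrl->tagset)
		blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
}

With the timer and timeout work synced before the requests are cancelled, a
late timeout handler can no longer complete a request a second time or stop
the queues after a successful reconnect, which also covers scenario 3.
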
> 
> 
> Thanks,
> Ming
> 
> .
> 

