From: Chao Leng <lengchao@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>, Jens Axboe <axboe@kernel.dk>,
	Yi Zhang <yi.zhang@redhat.com>, <linux-nvme@lists.infradead.org>,
	<linux-block@vger.kernel.org>, Keith Busch <kbusch@kernel.org>,
	"Christoph Hellwig" <hch@lst.de>
Subject: Re: [PATCH] block: re-introduce blk_mq_complete_request_sync
Date: Thu, 15 Oct 2020 14:05:01 +0800
Message-ID: <c9cf7168-d8ce-276f-de01-739199ed4258@huawei.com>
In-Reply-To: <20201014095642.GE775684@T590>



On 2020/10/14 17:56, Ming Lei wrote:
> On Wed, Oct 14, 2020 at 05:39:12PM +0800, Chao Leng wrote:
>>
>>
>> On 2020/10/14 11:34, Ming Lei wrote:
>>> On Wed, Oct 14, 2020 at 09:08:28AM +0800, Ming Lei wrote:
>>>> On Tue, Oct 13, 2020 at 03:36:08PM -0700, Sagi Grimberg wrote:
>>>>>
>>>>>>>> This may just reduce the probability. Concurrent timeout and teardown
>>>>>>>> can cause the same request to be handled twice, which is not what
>>>>>>>> we expect.
>>>>>>>
>>>>>>> That is right. Unlike SCSI, NVMe doesn't complete requests atomically,
>>>>>>> so a request may be completed/freed from both the timeout path and
>>>>>>> nvme_cancel_request().
>>>>>>>
>>>>>>> .teardown_lock may still cover the race with Sagi's patch because
>>>>>>> teardown actually cancels requests synchronously.
>>>>>> In extreme scenarios, the request may already have been retried
>>>>>> successfully (rq state changed back to in-flight).
>>>>>> Timeout processing may then wrongly stop the queue and abort the request.
>>>>>> teardown_lock serializes timeout and teardown, but does not
>>>>>> avoid the race.
>>>>>> It might not be safe.
>>>>>
>>>>> Not sure I understand the scenario you are describing.
>>>>>
>>>>> What do you mean by "In extreme scenarios, the request may already have
>>>>> been retried successfully (rq state changed back to in-flight)"?
>>>>>
>>>>> What will retry the request? The request will only be retried when the
>>>>> host reconnects.
>>>>>
>>>>> We can call nvme_sync_queues in the last part of the teardown, but
>>>>> I still don't understand the race here.
>>>>
>>>> Unlike SCSI, NVMe doesn't complete requests atomically, so a double
>>>> completion/free can happen from both the timeout path and
>>>> nvme_cancel_request() (via teardown).
>>>>
>>>> Given that the request is completed remotely or asynchronously in the two
>>>> code paths, the teardown_lock can't protect this case.
>>>
>>> Thinking about the issue further, the race shouldn't be between timeout and
>>> teardown.
>>>
>>> Both nvme_cancel_request() and nvme_tcp_complete_timed_out() are called
>>> with .teardown_lock held, and both check whether the request is completed
>>> before calling blk_mq_complete_request(), which marks the request as COMPLETE.
>>> So the request shouldn't be double-freed in the two code paths.
>>>
>>> Another possible reason is a race between timeout and normal completion
>>> (the fail-fast of pending requests after the ctrl state is updated to
>>> CONNECTING).
>>>
>>> Yi, can you try the following patch and see if the issue is fixed?
>>>
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index d6a3e1487354..fab9220196bd 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -1886,7 +1886,6 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
>>>    static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
>>>    		bool remove)
>>>    {
>>> -	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
>>>    	blk_mq_quiesce_queue(ctrl->admin_q);
>>>    	nvme_tcp_stop_queue(ctrl, 0);
>>>    	if (ctrl->admin_tagset) {
>>> @@ -1897,15 +1896,13 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
>>>    	if (remove)
>>>    		blk_mq_unquiesce_queue(ctrl->admin_q);
>>>    	nvme_tcp_destroy_admin_queue(ctrl, remove);
>>> -	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
>>>    }
>>>    static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
>>>    		bool remove)
>>>    {
>>> -	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
>>>    	if (ctrl->queue_count <= 1)
>>> -		goto out;
>>> +		return;
>>>    	blk_mq_quiesce_queue(ctrl->admin_q);
>>>    	nvme_start_freeze(ctrl);
>>>    	nvme_stop_queues(ctrl);
>>> @@ -1918,8 +1915,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
>>>    	if (remove)
>>>    		nvme_start_queues(ctrl);
>>>    	nvme_tcp_destroy_io_queues(ctrl, remove);
>>> -out:
>>> -	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
>>>    }
>>>    static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
>>> @@ -2030,11 +2025,11 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
>>>    	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
>>>    	nvme_stop_keep_alive(ctrl);
>>> +
>>> +	mutex_lock(&tcp_ctrl->teardown_lock);
>>>    	nvme_tcp_teardown_io_queues(ctrl, false);
>>> -	/* unquiesce to fail fast pending requests */
>>> -	nvme_start_queues(ctrl);
>>>    	nvme_tcp_teardown_admin_queue(ctrl, false);
>>> -	blk_mq_unquiesce_queue(ctrl->admin_q);
>> Deleting blk_mq_unquiesce_queue will cause a bug which may make reconnect fail.
>> Deleting nvme_start_queues may cause another bug.
> 
> nvme_tcp_setup_ctrl() will re-start the io and admin queues, and only
> .connect_q and .fabrics_q are required during reconnect.
I checked the code: the admin queue is unquiesced in
nvme_tcp_configure_admin_queue, so reconnect can work well.
> 
> So can you explain the bug in detail?
First, if reconnect fails, leaving the io and admin queues quiesced will pause
I/O for a long time.
Second, if reconnect fails more than max_reconnects times, deleting the ctrl
will hang. I'm not sure about the second point; you can check it.
The function that hangs is nvme_remove_namespaces, called from nvme_do_delete_ctrl.
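
For reference, a simplified sketch of the pre-patch error-recovery sequence
(reconstructed from the hunk above; other context lines elided) shows why the
two unquiesce calls are there:

	static void nvme_tcp_error_recovery_work(struct work_struct *work)
	{
		/* ... */
		nvme_stop_keep_alive(ctrl);
		nvme_tcp_teardown_io_queues(ctrl, false);
		/* unquiesce to fail fast pending requests */
		nvme_start_queues(ctrl);
		nvme_tcp_teardown_admin_queue(ctrl, false);
		/* same for the admin queue */
		blk_mq_unquiesce_queue(ctrl->admin_q);
		/* ... */
	}

Without those calls, requests left on the quiesced queues can only make
progress once a reconnect succeeds, which seems to be what makes the delete
path hang when reconnect keeps failing.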
> 
> Thanks,
> Ming
> 
> .
> 
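
For completeness, the completion-ownership check discussed above looks roughly
like this on the cancel side (a simplified sketch based on nvme_cancel_request();
logging and the boolean return elided). The timeout handler performs the same
blk_mq_request_completed() check before completing:

	/* don't touch a request that was already completed elsewhere */
	if (blk_mq_request_completed(rq))
		return;

	nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
	blk_mq_complete_request(rq);	/* marks the request COMPLETE */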

