Linux-NVME Archive on lore.kernel.org
* [PATCH] nvme-core: fix deadlock when delete ctrl due to reconnect fail
@ 2020-07-27  8:01 Chao Leng
  2020-07-27  8:04 ` Chao Leng
  0 siblings, 1 reply; 2+ messages in thread
From: Chao Leng @ 2020-07-27  8:01 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, lengchao, sagi

A deadlock occurs when testing link flapping with NVMe over RoCE. If a
request times out during the reconnect process, nvme_rdma_timeout->
nvme_rdma_teardown_io_queues quiesces the I/O queues; the controller
is then deleted once the number of reconnect attempts exceeds
max_reconnects. If fdisk runs between the time the queues are quiesced
and the time the controller is deleted, deleting the controller
deadlocks in the path nvme_do_delete_ctrl->nvme_remove_namespaces->
nvme_ns_remove->blk_cleanup_queue->blk_freeze_queue->
blk_mq_freeze_queue_wait: blk_mq_freeze_queue_wait waits for the
queue's q_usage_counter to drop to zero, but the queue is quiesced
and can never complete any request.

Solution: make nvme_rdma_teardown_io_queues call nvme_start_queues
after canceling the outstanding requests, regardless of whether the
remove flag is set; this also covers the nvme_rdma_timeout path.

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/rdma.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f8f856dc0c67..b381e2cde50a 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -989,8 +989,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
 				nvme_cancel_request, &ctrl->ctrl);
 			blk_mq_tagset_wait_completed_request(ctrl->ctrl.tagset);
 		}
-		if (remove)
-			nvme_start_queues(&ctrl->ctrl);
+		nvme_start_queues(&ctrl->ctrl);
 		nvme_rdma_destroy_io_queues(ctrl, remove);
 	}
 }
@@ -1128,7 +1127,6 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 
 	nvme_stop_keep_alive(&ctrl->ctrl);
 	nvme_rdma_teardown_io_queues(ctrl, false);
-	nvme_start_queues(&ctrl->ctrl);
 	nvme_rdma_teardown_admin_queue(ctrl, false);
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
-- 
2.16.4


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* Re: [PATCH] nvme-core: fix deadlock when delete ctrl due to reconnect fail
  2020-07-27  8:01 [PATCH] nvme-core: fix deadlock when delete ctrl due to reconnect fail Chao Leng
@ 2020-07-27  8:04 ` Chao Leng
  0 siblings, 0 replies; 2+ messages in thread
From: Chao Leng @ 2020-07-27  8:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi

The title is wrong. Ignore this email; see the next one.

On 2020/7/27 16:01, Chao Leng wrote:
> A deadlock occurs when testing link flapping with NVMe over RoCE. If a
> request times out during the reconnect process, nvme_rdma_timeout->
> nvme_rdma_teardown_io_queues quiesces the I/O queues; the controller
> is then deleted once the number of reconnect attempts exceeds
> max_reconnects. If fdisk runs between the time the queues are quiesced
> and the time the controller is deleted, deleting the controller
> deadlocks in the path nvme_do_delete_ctrl->nvme_remove_namespaces->
> nvme_ns_remove->blk_cleanup_queue->blk_freeze_queue->
> blk_mq_freeze_queue_wait: blk_mq_freeze_queue_wait waits for the
> queue's q_usage_counter to drop to zero, but the queue is quiesced
> and can never complete any request.
> 
> Solution: make nvme_rdma_teardown_io_queues call nvme_start_queues
> after canceling the outstanding requests, regardless of whether the
> remove flag is set; this also covers the nvme_rdma_timeout path.
> 
> Signed-off-by: Chao Leng <lengchao@huawei.com>
> ---
>   drivers/nvme/host/rdma.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index f8f856dc0c67..b381e2cde50a 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -989,8 +989,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
>   				nvme_cancel_request, &ctrl->ctrl);
>   			blk_mq_tagset_wait_completed_request(ctrl->ctrl.tagset);
>   		}
> -		if (remove)
> -			nvme_start_queues(&ctrl->ctrl);
> +		nvme_start_queues(&ctrl->ctrl);
>   		nvme_rdma_destroy_io_queues(ctrl, remove);
>   	}
>   }
> @@ -1128,7 +1127,6 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>   
>   	nvme_stop_keep_alive(&ctrl->ctrl);
>   	nvme_rdma_teardown_io_queues(ctrl, false);
> -	nvme_start_queues(&ctrl->ctrl);
>   	nvme_rdma_teardown_admin_queue(ctrl, false);
>   	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>   
> 

