From: David Milburn <dmilburn@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
	Keith Busch <kbusch@kernel.org>,
	James Smart <james.smart@broadcom.com>
Subject: Re: [PATCH 5/6] nvme-rdma: fix timeout handler
Date: Thu, 6 Aug 2020 14:52:42 -0500
Message-ID: <f2d0d1cb-f4d5-4b47-4b76-ba7d5198c74d@redhat.com>
In-Reply-To: <20200803065852.69987-6-sagi@grimberg.me>


Hi Sagi,

On 08/03/2020 01:58 AM, Sagi Grimberg wrote:
> Currently we check if the controller state != LIVE, and
> we directly fail the command under the assumption that this
> is the connect command or an admin command within the
> controller initialization sequence.
> 
> This is wrong: we need to check whether the request risks
> blocking controller setup/teardown if it is not completed,
> and only then fail it.
> 
> The logic should be:
> - RESETTING: only fail fabrics/admin commands, otherwise
>    controller teardown will block; for any other command,
>    reset the timer and come back again.
> - CONNECTING: if this is a connect (or an admin command), we fail
>    right away (to unblock controller initialization), otherwise we
>    treat it like anything else.
> - Otherwise: trigger error recovery and reset the timer (the
>    error handler will take care of completing/delaying it).
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>   drivers/nvme/host/rdma.c | 67 +++++++++++++++++++++++++++++-----------
>   1 file changed, 49 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index 44c76ffbb264..a58c6deaf691 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -1180,6 +1180,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
>   	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
>   		return;
>   
> +	dev_warn(ctrl->ctrl.device, "starting error recovery\n");
>   	queue_work(nvme_reset_wq, &ctrl->err_work);
>   }
>   
> @@ -1946,6 +1947,22 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
>   	return 0;
>   }
>   
> +static void nvme_rdma_complete_timed_out(struct request *rq)
> +{
> +	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
> +	struct nvme_rdma_queue *queue = req->queue;
> +	struct nvme_rdma_ctrl *ctrl = queue->ctrl;
> +
> +	/* fence other contexts that may complete the command */
> +	flush_work(&ctrl->err_work);
> +	nvme_rdma_stop_queue(queue);
> +	if (blk_mq_request_completed(rq))
> +		return;
> +	nvme_req(rq)->flags |= NVME_REQ_CANCELLED;
> +	nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
> +	blk_mq_complete_request(rq);


If keep_alive times out, it is possible that we try to
blk_mq_free_request() twice for the same request:

blk_mq_complete_request
  nvme_rdma_complete_rq
   blk_mq_end_request
    __blk_mq_end_request
     rq->end_io(rq, error) - nvme_keep_alive_end_io
      blk_mq_free_request
       __blk_mq_free_request
         rq->mq_hctx = NULL;
.
.
.
return BLK_EH_DONE to blk_mq_rq_timed_out

And then, before returning from blk_mq_check_expired,
we go back down:

rq->end_io(rq, 0)
  nvme_keep_alive_end_io
   blk_mq_free_request
    atomic_dec(&hctx->nr_active)

Since rq->mq_hctx is now NULL, we crash in blk_mq_free_request().
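
To make the sequence concrete, here is a toy userspace model of the
two frees (a sketch with simplified stand-in structs and functions,
not the actual block layer code):

#include <stdio.h>

struct hctx { int nr_active; };

struct request {
	struct hctx *mq_hctx;
	void (*end_io)(struct request *rq, int error);
};

/* stands in for blk_mq_free_request()/__blk_mq_free_request() */
static void toy_free_request(struct request *rq)
{
	rq->mq_hctx->nr_active--;	/* NULL deref on the second call */
	rq->mq_hctx = NULL;		/* what __blk_mq_free_request() does */
}

/* stands in for nvme_keep_alive_end_io() */
static void toy_keep_alive_end_io(struct request *rq, int error)
{
	(void)error;
	toy_free_request(rq);
}

int main(void)
{
	struct hctx hctx = { .nr_active = 1 };
	struct request rq = { .mq_hctx = &hctx,
			      .end_io = toy_keep_alive_end_io };

	/* first completion, via blk_mq_complete_request() in the
	 * timeout handler */
	rq.end_io(&rq, 0);
	printf("first free done, rq.mq_hctx is now NULL\n");

	/* second completion, from the timeout path: crashes here */
	rq.end_io(&rq, 0);
	return 0;
}

The second end_io dereferences the mq_hctx pointer that the first
free already cleared, matching the crash described above.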

Thanks,
David
> +}
> +
>   static enum blk_eh_timer_return
>   nvme_rdma_timeout(struct request *rq, bool reserved)
>   {
> @@ -1956,29 +1973,43 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
>   	dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n",
>   		 rq->tag, nvme_rdma_queue_idx(queue));
>   
> -	/*
> -	 * Restart the timer if a controller reset is already scheduled. Any
> -	 * timed out commands would be handled before entering the connecting
> -	 * state.
> -	 */
> -	if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
> +	switch (ctrl->ctrl.state) {
> +	case NVME_CTRL_RESETTING:
> +		if (!nvme_rdma_queue_idx(queue)) {
> +			/*
> +			 * if we are in teardown we must complete immediately
> +			 * because we may block the teardown sequence (e.g.
> +			 * nvme_disable_ctrl timed out).
> +			 */
> +			nvme_rdma_complete_timed_out(rq);
> +			return BLK_EH_DONE;
> +		}
> +		/*
> +		 * Restart the timer if a controller reset is already scheduled.
> +		 * Any timed out commands would be handled before entering the
> +		 * connecting state.
> +		 */
>   		return BLK_EH_RESET_TIMER;
> -
> -	if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
> +	case NVME_CTRL_CONNECTING:
> +		if (reserved || !nvme_rdma_queue_idx(queue)) {
> +			/*
> +			 * if we are connecting we must complete immediately
> +			 * connect (reserved) or admin requests because we may
> +			 * block controller setup sequence.
> +			 */
> +			nvme_rdma_complete_timed_out(rq);
> +			return BLK_EH_DONE;
> +		}
> +		/* fallthru */
> +	default:
>   		/*
> -		 * Teardown immediately if controller times out while starting
> -		 * or we are already started error recovery. all outstanding
> -		 * requests are completed on shutdown, so we return BLK_EH_DONE.
> +		 * every other state should trigger the error recovery
> +		 * which will be handled by the flow and controller state
> +		 * machine
>   		 */
> -		flush_work(&ctrl->err_work);
> -		nvme_rdma_teardown_io_queues(ctrl, false);
> -		nvme_rdma_teardown_admin_queue(ctrl, false);
> -		return BLK_EH_DONE;
> +		nvme_rdma_error_recovery(ctrl);
>   	}
>   
> -	dev_warn(ctrl->ctrl.device, "starting error recovery\n");
> -	nvme_rdma_error_recovery(ctrl);
> -
>   	return BLK_EH_RESET_TIMER;
>   }
>   
> 


Thread overview: 21+ messages
2020-08-03  6:58 [PATCH 0/6] fix possible controller reset hangs in nvme-tcp/nvme-rdma Sagi Grimberg
2020-08-03  6:58 ` [PATCH 1/6] nvme-fabrics: allow to queue requests for live queues Sagi Grimberg
2020-08-03  6:58 ` [PATCH 2/6] nvme: have nvme_wait_freeze_timeout return if it timed out Sagi Grimberg
2020-08-03  6:58 ` [PATCH 3/6] nvme-tcp: fix timeout handler Sagi Grimberg
2020-08-03  6:58 ` [PATCH 4/6] nvme-tcp: fix reset hang if controller died in the middle of a reset Sagi Grimberg
2020-08-03  6:58 ` [PATCH 5/6] nvme-rdma: fix timeout handler Sagi Grimberg
2020-08-03 10:25   ` Chao Leng
2020-08-03 15:03     ` Sagi Grimberg
2020-08-04  1:49       ` Chao Leng
2020-08-04 15:36         ` Sagi Grimberg
2020-08-05  1:07           ` Chao Leng
2020-08-05  1:12             ` Sagi Grimberg
2020-08-05  6:27               ` Chao Leng
2020-08-05  7:00                 ` Sagi Grimberg
2020-08-05  7:14                   ` Chao Leng
2020-08-05  7:19                     ` Sagi Grimberg
2020-08-05  7:35                       ` Chao Leng
2020-08-05  8:17                         ` Sagi Grimberg
2020-08-06 19:52   ` David Milburn [this message]
2020-08-06 20:11     ` Sagi Grimberg
2020-08-03  6:58 ` [PATCH 6/6] nvme-rdma: fix reset hang if controller died in the middle of a reset Sagi Grimberg
