From: Keith Busch <keith.busch@intel.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org,
	Jianchao Wang <jianchao.w.wang@oracle.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 1/2] nvme: pci: simplify timeout handling
Date: Fri, 27 Apr 2018 11:51:57 -0600	[thread overview]
Message-ID: <20180427175157.GB5073@localhost.localdomain> (raw)
In-Reply-To: <20180426123956.26039-2-ming.lei@redhat.com>

On Thu, Apr 26, 2018 at 08:39:55PM +0800, Ming Lei wrote:
> +/*
> + * This one is called after queues are quiesced, and no in-flight timeout
> + * and nvme interrupt handling.
> + */
> +static void nvme_pci_cancel_request(struct request *req, void *data,
> +		bool reserved)
> +{
> +	/* make sure timed-out requests are covered too */
> +	if (req->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) {
> +		req->aborted_gstate = 0;
> +		req->rq_flags &= ~RQF_MQ_TIMEOUT_EXPIRED;
> +	}
> +
> +	nvme_cancel_request(req, data, reserved);
> +}

I don't know about this. I feel like blk-mq owns these flags and LLDs
shouldn't require such knowledge of its implementation details.

I understand how the problems are happening a bit better now. It used
to be that blk-mq would lock expired commands one at a time, so when
we had a batch of IO timeouts, the driver was able to complete all of
them inside a single IO timeout handler.

That's not the case anymore, so the driver is called for every IO
timeout even if it already reaped all the commands at once.

IMO, the block layer should let the driver complete all timed-out
IOs at once rather than going through ->timeout() for each of them,
and without needing to know the request state details. Many of the
problems would go away in that case.
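To make the suggestion concrete, here is a minimal userspace sketch (not kernel code; `fake_req` and `reap_expired` are made-up names for illustration) of reaping every expired request in a single timeout pass instead of invoking a per-request handler:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a tagged request with an expiration deadline. */
struct fake_req {
	int tag;
	unsigned long deadline;
	bool done;
};

/*
 * One scan completes every request whose deadline has passed and
 * returns how many were reaped, rather than a separate ->timeout()
 * invocation per expired request.
 */
static size_t reap_expired(struct fake_req *reqs, size_t n, unsigned long now)
{
	size_t reaped = 0;

	for (size_t i = 0; i < n; i++) {
		if (!reqs[i].done && reqs[i].deadline <= now) {
			reqs[i].done = true;	/* complete the request */
			reaped++;
		}
	}
	return reaped;
}
```

One pass over the tagset replaces N handler invocations, which is the behavior the old one-at-a-time locking happened to allow.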

But I think much of this can be fixed by just syncing the queues in the
reset work, which may be a more reasonable bug-fix for 4.17 than
rewriting the entire timeout handler. What do you think of this patch?
It's an update of one I posted a few months ago, but it uses Jianchao's
safe namespace list locking.

---
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index a3771c5729f5..198b4469c3e2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3562,12 +3562,25 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;
 
 	down_read(&ctrl->namespaces_rwsem);
-	list_for_each_entry(ns, &ctrl->namespaces, list)
+	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		blk_mq_unquiesce_queue(ns->queue);
+		blk_mq_kick_requeue_list(ns->queue);
+	}
 	up_read(&ctrl->namespaces_rwsem);
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
 
+void nvme_sync_queues(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns;
+
+	down_read(&ctrl->namespaces_rwsem);
+	list_for_each_entry(ns, &ctrl->namespaces, list)
+		blk_sync_queue(ns->queue);
+	up_read(&ctrl->namespaces_rwsem);
+}
+EXPORT_SYMBOL_GPL(nvme_sync_queues);
+
 int nvme_reinit_tagset(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set)
 {
 	if (!ctrl->ops->reinit_request)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 7ded7a51c430..e62273198725 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -402,6 +402,7 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
 void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
 		union nvme_result *res);
 
+void nvme_sync_queues(struct nvme_ctrl *ctrl);
 void nvme_stop_queues(struct nvme_ctrl *ctrl);
 void nvme_start_queues(struct nvme_ctrl *ctrl);
 void nvme_kill_queues(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fbc71fac6f1e..a2f3ad105620 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2303,6 +2303,7 @@ static void nvme_reset_work(struct work_struct *work)
 	 */
 	if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
 		nvme_dev_disable(dev, false);
+	nvme_sync_queues(&dev->ctrl);
 
 	/*
 	 * Introduce CONNECTING state from nvme-fc/rdma transports to mark the
--
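The ordering the pci.c hunk establishes can be modeled with a small userspace sketch (illustrative names only; in the real kernel, blk_sync_queue() waits for the queue's pending timeout work to finish):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the reset/timeout race; not kernel code. */
struct fake_ctrl {
	bool enabled;
	bool timeout_work_pending;
	bool raced;
};

/* blk_sync_queue() analogue: drain pending timeout work to completion. */
static void sync_queues(struct fake_ctrl *c)
{
	c->timeout_work_pending = false;
}

/* Mirrors the patched nvme_reset_work() ordering. */
static void reset_work(struct fake_ctrl *c, bool do_sync)
{
	c->enabled = false;		/* nvme_dev_disable() */
	if (do_sync)
		sync_queues(c);		/* the added nvme_sync_queues() call */

	/* Re-initialization: timeout work still pending here would race. */
	if (c->timeout_work_pending)
		c->raced = true;

	c->enabled = true;
}
```

Without the sync, stale timeout work can fire in the middle of re-initialization; with it, the reset path proceeds with no timeout handler left in flight.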

Thread overview: 72+ messages
2018-04-26 12:39 [PATCH 0/2] nvme: pci: fix & improve timeout handling Ming Lei
2018-04-26 12:39 ` [PATCH 1/2] nvme: pci: simplify " Ming Lei
2018-04-26 15:07   ` jianchao.wang
2018-04-26 15:57     ` Ming Lei
2018-04-26 16:16       ` Ming Lei
2018-04-27  1:37       ` jianchao.wang
2018-04-27 14:57         ` Ming Lei
2018-04-28 14:00           ` jianchao.wang
2018-04-28 21:57             ` Ming Lei
2018-04-28 22:27               ` Ming Lei
2018-04-29  1:36                 ` Ming Lei
2018-04-29  2:21                   ` jianchao.wang
2018-04-29 14:13                     ` Ming Lei
2018-04-27 17:51   ` Keith Busch [this message]
2018-04-28  3:50     ` Ming Lei
2018-04-28 13:35       ` Keith Busch
2018-04-28 14:31         ` jianchao.wang
2018-04-28 21:39         ` Ming Lei
2018-04-30 19:52           ` Keith Busch
2018-04-30 23:14             ` Ming Lei
2018-05-08 15:30       ` Keith Busch
2018-05-10 20:52         ` Ming Lei
2018-05-10 21:05           ` Keith Busch
2018-05-10 21:10             ` Ming Lei
2018-05-10 21:18               ` Keith Busch
2018-05-10 21:24                 ` Ming Lei
2018-05-10 21:44                   ` Keith Busch
2018-05-10 21:50                     ` Ming Lei
2018-05-10 21:53                     ` Ming Lei
2018-05-10 22:03                 ` Ming Lei
2018-05-10 22:43                   ` Keith Busch
2018-05-11  0:14                     ` Ming Lei
2018-05-11  2:10             ` Ming Lei
2018-04-26 12:39 ` [PATCH 2/2] nvme: pci: guarantee EH can make progress Ming Lei
2018-04-26 16:24   ` Keith Busch
2018-04-28  3:28     ` Ming Lei
