linux-block.vger.kernel.org archive mirror
* [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races
@ 2020-10-16 14:28 Ming Lei
  2020-10-16 14:28 ` [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs Ming Lei
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Ming Lei @ 2020-10-16 14:28 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Ming Lei, Chao Leng, Sagi Grimberg, Yi Zhang

Hi,

The first two patches fix request completion related races.

The remaining patches fix and improve nvme-tcp error recovery.

With these four patches applied, nvme/012 passes on nvme-tcp on Yi
Zhang's test machine.


Ming Lei (4):
  blk-mq: check rq->state explicitly in
    blk_mq_tagset_count_completed_rqs
  blk-mq: think request as completed if it isn't IN_FLIGHT.
  nvme: tcp: fix race between timeout and normal completion
  nvme: tcp: complete non-IO requests atomically

 block/blk-flush.c       |  2 ++
 block/blk-mq-tag.c      |  2 +-
 drivers/nvme/host/tcp.c | 76 +++++++++++++++++++++++++++++------------
 include/linux/blk-mq.h  |  8 ++++-
 4 files changed, 65 insertions(+), 23 deletions(-)

CC: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Yi Zhang <yi.zhang@redhat.com>


-- 
2.25.2



* [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs
  2020-10-16 14:28 [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Ming Lei
@ 2020-10-16 14:28 ` Ming Lei
  2020-10-19  0:50   ` Ming Lei
  2020-10-16 14:28 ` [PATCH 2/4] blk-mq: think request as completed if it isn't IN_FLIGHT Ming Lei
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2020-10-16 14:28 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Ming Lei, Chao Leng, Sagi Grimberg

blk_mq_tagset_count_completed_rqs() is called from
blk_mq_tagset_wait_completed_request() to drain requests being completed
remotely. What we actually need to observe is request->state switching
away from MQ_RQ_COMPLETE.

So check MQ_RQ_COMPLETE explicitly in blk_mq_tagset_count_completed_rqs().

Meanwhile, mark the flush request as IDLE in its .end_io(), aligning it
with how normal requests are ended. The flush request may stay in the
in-flight tags when there is no elevator, so its state has to be moved
to IDLE explicitly.
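
For context, blk_mq_tagset_wait_completed_request() polls the count
produced by this helper until it drops to zero, roughly like this
(paraphrased sketch of block/blk-mq-tag.c, not part of this patch):

    void blk_mq_tagset_wait_completed_request(struct blk_mq_tag_set *tagset)
    {
            while (true) {
                    unsigned count = 0;

                    /* count requests whose remote completion is still pending */
                    blk_mq_tagset_busy_iter(tagset,
                                    blk_mq_tagset_count_completed_rqs, &count);
                    if (!count)
                            break;
                    msleep(5);
            }
    }

The loop only exits once no request in the tagset is seen in the
COMPLETE state any more, which is why the counting helper tests
MQ_RQ_COMPLETE explicitly here.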

Cc: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-flush.c  | 2 ++
 block/blk-mq-tag.c | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 53abb5c73d99..31a2ae04d3f0 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -231,6 +231,8 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
 		return;
 	}
 
+	WRITE_ONCE(rq->state, MQ_RQ_IDLE);
+
 	if (fq->rq_status != BLK_STS_OK)
 		error = fq->rq_status;
 
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 9c92053e704d..10ff8968b93b 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -367,7 +367,7 @@ static bool blk_mq_tagset_count_completed_rqs(struct request *rq,
 {
 	unsigned *count = data;
 
-	if (blk_mq_request_completed(rq))
+	if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
 		(*count)++;
 	return true;
 }
-- 
2.25.2



* [PATCH 2/4] blk-mq: think request as completed if it isn't IN_FLIGHT.
  2020-10-16 14:28 [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Ming Lei
  2020-10-16 14:28 ` [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs Ming Lei
@ 2020-10-16 14:28 ` Ming Lei
  2020-10-16 14:28 ` [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion Ming Lei
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2020-10-16 14:28 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Ming Lei, Yi Zhang, Chao Leng, Sagi Grimberg

MQ_RQ_COMPLETE is a transient state: once the .complete callback ends or
requeues the request, the request state is updated to IDLE.

blk_mq_request_completed() is often used by drivers to avoid double
completion, together with a driver-specific synchronization mechanism.
For example, NVMe TCP calls blk_mq_request_completed() in both its
timeout handler and its abort handler to avoid completing a request
twice. If the request state has already moved on to IDLE when one of
them checks, the other code path may conclude that the request is not
completed and complete it one more time, triggering a double completion.

Yi reported[1] that 'refcount_t: underflow; use-after-free' of rq->ref
is triggered by blktests (nvme/012) on one very slow machine.

Fix this issue by treating a request as completed if its state is no
longer IN_FLIGHT.
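
As an illustration of the usage pattern this protects (a hypothetical
driver timeout handler, not code from this series):

    static enum blk_eh_timer_return drv_timeout(struct request *rq, bool reserved)
    {
            /*
             * If the request has already left IN_FLIGHT (the normal
             * completion path won the race), completing it again here
             * would drop rq->ref a second time and underflow it.
             */
            if (blk_mq_request_completed(rq))
                    return BLK_EH_DONE;

            blk_mq_complete_request(rq);
            return BLK_EH_DONE;
    }

With the old "== MQ_RQ_COMPLETE" check, a request that had already
advanced to IDLE would slip past this test and be completed twice.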

Reported-by: Yi Zhang <yi.zhang@redhat.com>
Cc: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/blk-mq.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b23eeca4d677..9a0c1f8ac42d 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -486,9 +486,15 @@ static inline int blk_mq_request_started(struct request *rq)
 	return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
 }
 
+/*
+ * This is often called from abort handlers to avoid double completion.
+ * MQ_RQ_COMPLETE is a transient state because the .complete callback
+ * may end or requeue the request; either way the request ends up marked
+ * as IDLE. So return true once the request state is no longer IN_FLIGHT.
+ */
 static inline int blk_mq_request_completed(struct request *rq)
 {
-	return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
+	return blk_mq_rq_state(rq) != MQ_RQ_IN_FLIGHT;
 }
 
 void blk_mq_start_request(struct request *rq);
-- 
2.25.2



* [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion
  2020-10-16 14:28 [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Ming Lei
  2020-10-16 14:28 ` [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs Ming Lei
  2020-10-16 14:28 ` [PATCH 2/4] blk-mq: think request as completed if it isn't IN_FLIGHT Ming Lei
@ 2020-10-16 14:28 ` Ming Lei
  2020-10-20  8:11   ` Sagi Grimberg
  2020-10-16 14:28 ` [PATCH 4/4] nvme: tcp: complete non-IO requests atomically Ming Lei
  2020-10-20  7:32 ` [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Yi Zhang
  4 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2020-10-16 14:28 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Ming Lei, Chao Leng, Sagi Grimberg, Yi Zhang

The NVMe TCP timeout handler aborts a request directly whenever the
controller isn't in the LIVE state. nvme_tcp_error_recovery() updates
the controller state to RESETTING and schedules the error recovery work.
If a new timeout fires before that work function has run, the newly
timed-out request is aborted directly even though the controller hasn't
been shut down yet, so a timeout abort vs. normal completion race is
triggered.
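
For context, the recovery trigger itself only changes the state and
schedules the work; the actual queue shutdown happens later in the work
function (paraphrased sketch of the current nvme_tcp_error_recovery(),
not part of this patch):

    static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
    {
            if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
                    return;

            dev_warn(ctrl->device, "starting error recovery\n");
            queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
    }

Any timeout that fires in the window between this state change and the
error recovery work actually tearing down the queues sees a non-LIVE
controller while the socket is still live.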

Fix the race by the following approach:

1) Abort a timed-out request directly only when the controller is in the
CONNECTING or DELETING state. In those two states the controller has
already been shut down, so it is safe to do so. It is also enough for
recovering the controller, because during RESETTING we only stop/destroy
queues and cancel all in-flight requests; no new request is required in
RESETTING.

2) Delay unquiescing the IO queues and the admin queue until the
controller is LIVE, since it isn't necessary to start the queues during
RESETTING. Unquiescing earlier risks the timeout vs. normal completion
race, because timed-out requests have to be aborted directly during the
CONNECTING state while the controller is being set up.

CC: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/tcp.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index d6a3e1487354..56ac61a90c1b 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1886,7 +1886,6 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
 static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	blk_mq_quiesce_queue(ctrl->admin_q);
 	nvme_tcp_stop_queue(ctrl, 0);
 	if (ctrl->admin_tagset) {
@@ -1897,15 +1896,13 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
 	if (remove)
 		blk_mq_unquiesce_queue(ctrl->admin_q);
 	nvme_tcp_destroy_admin_queue(ctrl, remove);
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }
 
 static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	if (ctrl->queue_count <= 1)
-		goto out;
+		return;
 	blk_mq_quiesce_queue(ctrl->admin_q);
 	nvme_start_freeze(ctrl);
 	nvme_stop_queues(ctrl);
@@ -1918,8 +1915,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 	if (remove)
 		nvme_start_queues(ctrl);
 	nvme_tcp_destroy_io_queues(ctrl, remove);
-out:
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }
 
 static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
@@ -1938,6 +1933,8 @@ static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
 				ctrl->opts->reconnect_delay * HZ);
 	} else {
 		dev_info(ctrl->device, "Removing controller...\n");
+		nvme_start_queues(ctrl);
+		blk_mq_unquiesce_queue(ctrl->admin_q);
 		nvme_delete_ctrl(ctrl);
 	}
 }
@@ -2030,11 +2027,11 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
 
 	nvme_stop_keep_alive(ctrl);
+
+	mutex_lock(&tcp_ctrl->teardown_lock);
 	nvme_tcp_teardown_io_queues(ctrl, false);
-	/* unquiesce to fail fast pending requests */
-	nvme_start_queues(ctrl);
 	nvme_tcp_teardown_admin_queue(ctrl, false);
-	blk_mq_unquiesce_queue(ctrl->admin_q);
+	mutex_unlock(&tcp_ctrl->teardown_lock);
 
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
 		/* state change failure is ok if we started ctrl delete */
@@ -2051,6 +2048,7 @@ static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
 	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
 
+	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	nvme_tcp_teardown_io_queues(ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->admin_q);
 	if (shutdown)
@@ -2058,6 +2056,7 @@ static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 	else
 		nvme_disable_ctrl(ctrl);
 	nvme_tcp_teardown_admin_queue(ctrl, shutdown);
+	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }
 
 static void nvme_tcp_delete_ctrl(struct nvme_ctrl *ctrl)
@@ -2192,19 +2191,20 @@ nvme_tcp_timeout(struct request *rq, bool reserved)
 		"queue %d: timeout request %#x type %d\n",
 		nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
 
-	if (ctrl->state != NVME_CTRL_LIVE) {
+	/*
+	 * During CONNECTING or DELETING, the controller has been shut down,
+	 * so it is safe to abort the request directly; otherwise a timeout
+	 * vs. normal completion race would be triggered.
+	 */
+	if (ctrl->state == NVME_CTRL_CONNECTING ||
+			ctrl->state == NVME_CTRL_DELETING ||
+			ctrl->state == NVME_CTRL_DELETING_NOIO) {
 		/*
-		 * If we are resetting, connecting or deleting we should
-		 * complete immediately because we may block controller
-		 * teardown or setup sequence
+		 * If we are connecting we should complete immediately because
+		 * we may block the controller setup sequence
 		 * - ctrl disable/shutdown fabrics requests
 		 * - connect requests
 		 * - initialization admin requests
-		 * - I/O requests that entered after unquiescing and
-		 *   the controller stopped responding
-		 *
-		 * All other requests should be cancelled by the error
-		 * recovery work, so it's fine that we fail it here.
 		 */
 		nvme_tcp_complete_timed_out(rq);
 		return BLK_EH_DONE;
-- 
2.25.2



* [PATCH 4/4] nvme: tcp: complete non-IO requests atomically
  2020-10-16 14:28 [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Ming Lei
                   ` (2 preceding siblings ...)
  2020-10-16 14:28 ` [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion Ming Lei
@ 2020-10-16 14:28 ` Ming Lei
  2020-10-20  8:14   ` Sagi Grimberg
  2020-10-20  7:32 ` [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Yi Zhang
  4 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2020-10-16 14:28 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Ming Lei, Chao Leng, Sagi Grimberg, Yi Zhang

During the controller's CONNECTING state, admin/fabrics/connect requests
are submitted for recovery, and we allow these requests to be aborted
directly in the timeout handler so that the setup step isn't blocked.

As a result, a timeout vs. normal completion race may be triggered on
these requests.

Add atomic completion for requests from the connect/fabrics/admin queues
to avoid the race.
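
In outline, both the normal completion path and the timeout handler must
first win a per-request test_and_set_bit() before they are allowed to
complete the request (simplified sketch of the pattern; the real helpers
are in the diff below):

    /* whichever context claims the bit first owns the completion */
    if (!test_and_set_bit(REQ_STATE_COMPLETE, &req->comp_state)) {
            /* winner: complete the request exactly once */
            if (!nvme_try_complete_req(rq, cqe->status, cqe->result))
                    nvme_complete_rq(rq);
    }
    /* the loser does nothing, so nvme_complete_rq() runs at most once */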

CC: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/tcp.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 56ac61a90c1b..654061abdc5a 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -30,6 +30,8 @@ static int so_priority;
 module_param(so_priority, int, 0644);
 MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
 
+#define REQ_STATE_COMPLETE     0
+
 enum nvme_tcp_send_state {
 	NVME_TCP_SEND_CMD_PDU = 0,
 	NVME_TCP_SEND_H2C_PDU,
@@ -56,6 +58,8 @@ struct nvme_tcp_request {
 	size_t			offset;
 	size_t			data_sent;
 	enum nvme_tcp_send_state state;
+
+	unsigned long		comp_state;
 };
 
 enum nvme_tcp_queue_flags {
@@ -469,6 +473,33 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
 	queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
 }
 
+/*
+ * Requests originating from the admin, fabrics and connect_q queues have
+ * to be completed atomically because we don't cover the race between
+ * timeout and normal completion for these queues.
+ */
+static inline bool nvme_tcp_need_atomic_complete(struct request *rq)
+{
+	return !rq->rq_disk;
+}
+
+static inline void nvme_tcp_clear_rq_complete(struct request *rq)
+{
+	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+
+	if (unlikely(nvme_tcp_need_atomic_complete(rq)))
+		clear_bit(REQ_STATE_COMPLETE, &req->comp_state);
+}
+
+static inline bool nvme_tcp_mark_rq_complete(struct request *rq)
+{
+	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+
+	if (unlikely(nvme_tcp_need_atomic_complete(rq)))
+		return !test_and_set_bit(REQ_STATE_COMPLETE, &req->comp_state);
+	return true;
+}
+
 static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
 		struct nvme_completion *cqe)
 {
@@ -483,7 +514,8 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
 		return -EINVAL;
 	}
 
-	if (!nvme_try_complete_req(rq, cqe->status, cqe->result))
+	if (nvme_tcp_mark_rq_complete(rq) &&
+			!nvme_try_complete_req(rq, cqe->status, cqe->result))
 		nvme_complete_rq(rq);
 	queue->nr_cqe++;
 
@@ -674,7 +706,8 @@ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
 {
 	union nvme_result res = {};
 
-	if (!nvme_try_complete_req(rq, cpu_to_le16(status << 1), res))
+	if (nvme_tcp_mark_rq_complete(rq) &&
+			!nvme_try_complete_req(rq, cpu_to_le16(status << 1), res))
 		nvme_complete_rq(rq);
 }
 
@@ -2173,7 +2206,7 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
 	/* fence other contexts that may complete the command */
 	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
-	if (!blk_mq_request_completed(rq)) {
+	if (nvme_tcp_mark_rq_complete(rq) && !blk_mq_request_completed(rq)) {
 		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
 		blk_mq_complete_request(rq);
 	}
@@ -2315,6 +2348,7 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(ret))
 		return ret;
 
+	nvme_tcp_clear_rq_complete(rq);
 	blk_mq_start_request(rq);
 
 	nvme_tcp_queue_request(req, true, bd->last);
-- 
2.25.2



* Re: [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs
  2020-10-16 14:28 ` [PATCH 1/4] blk-mq: check rq->state explicitly in blk_mq_tagset_count_completed_rqs Ming Lei
@ 2020-10-19  0:50   ` Ming Lei
  0 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2020-10-19  0:50 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch
  Cc: Chao Leng, Sagi Grimberg

On Fri, Oct 16, 2020 at 10:28:08PM +0800, Ming Lei wrote:
> blk_mq_tagset_count_completed_rqs() is called from
> blk_mq_tagset_wait_completed_request() to drain requests being completed
> remotely. What we actually need to observe is request->state switching
> away from MQ_RQ_COMPLETE.
> 
> So check MQ_RQ_COMPLETE explicitly in blk_mq_tagset_count_completed_rqs().
> 
> Meanwhile, mark the flush request as IDLE in its .end_io(), aligning it
> with how normal requests are ended. The flush request may stay in the
> in-flight tags when there is no elevator, so its state has to be moved
> to IDLE explicitly.
> 
> Cc: Chao Leng <lengchao@huawei.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-flush.c  | 2 ++
>  block/blk-mq-tag.c | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index 53abb5c73d99..31a2ae04d3f0 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -231,6 +231,8 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
>  		return;
>  	}
>  
> +	WRITE_ONCE(rq->state, MQ_RQ_IDLE);

oops, the above line should have been:

	WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);

Thanks,
Ming



* Re: [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races
  2020-10-16 14:28 [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races Ming Lei
                   ` (3 preceding siblings ...)
  2020-10-16 14:28 ` [PATCH 4/4] nvme: tcp: complete non-IO requests atomically Ming Lei
@ 2020-10-20  7:32 ` Yi Zhang
  4 siblings, 0 replies; 10+ messages in thread
From: Yi Zhang @ 2020-10-20  7:32 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Sagi Grimberg, Chao Leng

Thanks Ming, feel free to add:
Tested-by: Yi Zhang <yi.zhang@redhat.com>

For the timeout issue, I've filed the bug below to track it, thanks.
https://bugzilla.kernel.org/show_bug.cgi?id=209763

Best Regards,
  Yi Zhang


----- Original Message -----
From: "Ming Lei" <ming.lei@redhat.com>
To: "Jens Axboe" <axboe@kernel.dk>, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, "Christoph Hellwig" <hch@lst.de>, "Keith Busch" <kbusch@kernel.org>
Cc: "Yi Zhang" <yi.zhang@redhat.com>, "Sagi Grimberg" <sagi@grimberg.me>, "Chao Leng" <lengchao@huawei.com>, "Ming Lei" <ming.lei@redhat.com>
Sent: Friday, October 16, 2020 10:28:07 PM
Subject: [PATCH 0/4] blk-mq/nvme-tcp: fix timed out related races

Hi,

The first two patches fix request completion related races.

The remaining patches fix and improve nvme-tcp error recovery.

With these four patches applied, nvme/012 passes on nvme-tcp on Yi
Zhang's test machine.


Ming Lei (4):
  blk-mq: check rq->state explicitly in
    blk_mq_tagset_count_completed_rqs
  blk-mq: think request as completed if it isn't IN_FLIGHT.
  nvme: tcp: fix race between timeout and normal completion
  nvme: tcp: complete non-IO requests atomically

 block/blk-flush.c       |  2 ++
 block/blk-mq-tag.c      |  2 +-
 drivers/nvme/host/tcp.c | 76 +++++++++++++++++++++++++++++------------
 include/linux/blk-mq.h  |  8 ++++-
 4 files changed, 65 insertions(+), 23 deletions(-)

CC: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Yi Zhang <yi.zhang@redhat.com>


-- 
2.25.2





* Re: [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion
  2020-10-16 14:28 ` [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion Ming Lei
@ 2020-10-20  8:11   ` Sagi Grimberg
  2020-10-20  9:44     ` Ming Lei
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2020-10-20  8:11 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch
  Cc: Chao Leng, Yi Zhang


> The NVMe TCP timeout handler aborts a request directly whenever the
> controller isn't in the LIVE state. nvme_tcp_error_recovery() updates
> the controller state to RESETTING and schedules the error recovery work.
> If a new timeout fires before that work function has run, the newly
> timed-out request is aborted directly even though the controller hasn't
> been shut down yet, so a timeout abort vs. normal completion race is
> triggered.

This assertion is incorrect; before completing the request from the
timeout handler, we call nvme_tcp_stop_queue, which guarantees upon
return that no more completions will be seen from this queue.
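
For reference, the stop path at this point looks roughly like this
(paraphrased from drivers/nvme/host/tcp.c of this era; details may
differ):

    static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue)
    {
            /* shut the socket down and stop the receive work */
            kernel_sock_shutdown(queue->sock, SHUT_RDWR);
            nvme_tcp_restore_sock_calls(queue);
            cancel_work_sync(&queue->io_work);
    }

    static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
    {
            struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
            struct nvme_tcp_queue *queue = &ctrl->queues[qid];

            if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
                    return;
            __nvme_tcp_stop_queue(queue);
    }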

> Fix the race by the following approach:
> 
> 1) Abort a timed-out request directly only when the controller is in the
> CONNECTING or DELETING state. In those two states the controller has
> already been shut down, so it is safe to do so. It is also enough for
> recovering the controller, because during RESETTING we only stop/destroy
> queues and cancel all in-flight requests; no new request is required in
> RESETTING.

Unfortunately RESETTING also requires direct completion, because this
state may include I/O that can time out, and unless we complete it the
reset flow cannot make forward progress
(nvme_disable_ctrl/nvme_shutdown_ctrl generate I/O in fabrics).

> 
> 2) Delay unquiescing the IO queues and the admin queue until the
> controller is LIVE, since it isn't necessary to start the queues during
> RESETTING. Unquiescing earlier risks the timeout vs. normal completion
> race, because timed-out requests have to be aborted directly during the
> CONNECTING state while the controller is being set up.

We can't unquiesce I/O only when the controller is LIVE, because I/O
needs to be able to fail over for multipath, and that should not be tied
to the controller becoming LIVE again whatsoever...


* Re: [PATCH 4/4] nvme: tcp: complete non-IO requests atomically
  2020-10-16 14:28 ` [PATCH 4/4] nvme: tcp: complete non-IO requests atomically Ming Lei
@ 2020-10-20  8:14   ` Sagi Grimberg
  0 siblings, 0 replies; 10+ messages in thread
From: Sagi Grimberg @ 2020-10-20  8:14 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch
  Cc: Chao Leng, Yi Zhang


> During the controller's CONNECTING state, admin/fabrics/connect requests
> are submitted for recovery, and we allow these requests to be aborted
> directly in the timeout handler so that the setup step isn't blocked.
> 
> As a result, a timeout vs. normal completion race may be triggered on
> these requests.
> 
> Add atomic completion for requests from the connect/fabrics/admin queues
> to avoid the race.

I don't understand why this is needed... I don't understand the race,
and I don't understand why fencing the queue with nvme_tcp_stop_queue
and serialization against the teardown flow is not sufficient.

> 
> CC: Chao Leng <lengchao@huawei.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Reported-by: Yi Zhang <yi.zhang@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   drivers/nvme/host/tcp.c | 40 +++++++++++++++++++++++++++++++++++++---
>   1 file changed, 37 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 56ac61a90c1b..654061abdc5a 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -30,6 +30,8 @@ static int so_priority;
>   module_param(so_priority, int, 0644);
>   MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
>   
> +#define REQ_STATE_COMPLETE     0
> +
>   enum nvme_tcp_send_state {
>   	NVME_TCP_SEND_CMD_PDU = 0,
>   	NVME_TCP_SEND_H2C_PDU,
> @@ -56,6 +58,8 @@ struct nvme_tcp_request {
>   	size_t			offset;
>   	size_t			data_sent;
>   	enum nvme_tcp_send_state state;
> +
> +	unsigned long		comp_state;
>   };
>   
>   enum nvme_tcp_queue_flags {
> @@ -469,6 +473,33 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
>   	queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
>   }
>   
> +/*
> + * Requests originating from the admin, fabrics and connect_q queues have
> + * to be completed atomically because we don't cover the race between
> + * timeout and normal completion for these queues.
> + */
> +static inline bool nvme_tcp_need_atomic_complete(struct request *rq)
> +{
> +	return !rq->rq_disk;
> +}
> +
> +static inline void nvme_tcp_clear_rq_complete(struct request *rq)
> +{
> +	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
> +
> +	if (unlikely(nvme_tcp_need_atomic_complete(rq)))
> +		clear_bit(REQ_STATE_COMPLETE, &req->comp_state);
> +}
> +
> +static inline bool nvme_tcp_mark_rq_complete(struct request *rq)
> +{
> +	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
> +
> +	if (unlikely(nvme_tcp_need_atomic_complete(rq)))
> +		return !test_and_set_bit(REQ_STATE_COMPLETE, &req->comp_state);
> +	return true;
> +}
> +
>   static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
>   		struct nvme_completion *cqe)
>   {
> @@ -483,7 +514,8 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
>   		return -EINVAL;
>   	}
>   
> -	if (!nvme_try_complete_req(rq, cqe->status, cqe->result))
> +	if (nvme_tcp_mark_rq_complete(rq) &&
> +			!nvme_try_complete_req(rq, cqe->status, cqe->result))
>   		nvme_complete_rq(rq);
>   	queue->nr_cqe++;
>   
> @@ -674,7 +706,8 @@ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
>   {
>   	union nvme_result res = {};
>   
> -	if (!nvme_try_complete_req(rq, cpu_to_le16(status << 1), res))
> +	if (nvme_tcp_mark_rq_complete(rq) &&
> +			!nvme_try_complete_req(rq, cpu_to_le16(status << 1), res))
>   		nvme_complete_rq(rq);
>   }
>   
> @@ -2173,7 +2206,7 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
>   	/* fence other contexts that may complete the command */
>   	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
>   	nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
> -	if (!blk_mq_request_completed(rq)) {
> +	if (nvme_tcp_mark_rq_complete(rq) && !blk_mq_request_completed(rq)) {
>   		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
>   		blk_mq_complete_request(rq);
>   	}
> @@ -2315,6 +2348,7 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
>   	if (unlikely(ret))
>   		return ret;
>   
> +	nvme_tcp_clear_rq_complete(rq);
>   	blk_mq_start_request(rq);
>   
>   	nvme_tcp_queue_request(req, true, bd->last);
> 


* Re: [PATCH 3/4] nvme: tcp: fix race between timeout and normal completion
  2020-10-20  8:11   ` Sagi Grimberg
@ 2020-10-20  9:44     ` Ming Lei
  0 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2020-10-20  9:44 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Chao Leng, Yi Zhang

On Tue, Oct 20, 2020 at 01:11:11AM -0700, Sagi Grimberg wrote:
> 
> > NVMe TCP timeout handler allows to abort request directly when the
> > controller isn't in LIVE state. nvme_tcp_error_recovery() updates
> > controller state as RESETTING, and schedule reset work function. If
> > new timeout comes before the work function is called, the new timedout
> > request will be aborted directly, however at that time, the controller
> > isn't shut down yet, then timeout abort vs. normal completion race
> > will be triggered.
> 
> This assertion is incorrect, the before completing the request from
> the timeout handler, we call nvme_tcp_stop_queue, which guarantees upon
> return that no more completions will be seen from this queue.

OK, then it looks like the issue can be fixed by patches 1 & 2 only.

Yi, can you test again and see if the issue can be fixed by patches 1 & 2?


Thanks,
Ming


