Linux-NVME Archive on lore.kernel.org
* [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send
@ 2020-02-26  0:43 Sagi Grimberg
  2020-02-26  0:43 ` [PATCH 2/2] nvme-tcp: break from io_work loop if recv failed Sagi Grimberg
  2020-03-10 20:53 ` [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send Keith Busch
  0 siblings, 2 replies; 3+ messages in thread
From: Sagi Grimberg @ 2020-02-26  0:43 UTC (permalink / raw)
  To: linux-nvme, Keith Busch, Christoph Hellwig

Consolidate the request failure handling code into nvme_tcp_try_send,
the function where the request is fetched and the send is attempted.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/tcp.c | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 11a7c26f8573..221a5a59aa06 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1027,8 +1027,15 @@ static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)
 	if (req->state == NVME_TCP_SEND_DDGST)
 		ret = nvme_tcp_try_send_ddgst(req);
 done:
-	if (ret == -EAGAIN)
+	if (ret == -EAGAIN) {
 		ret = 0;
+	} else if (ret < 0) {
+		dev_err(queue->ctrl->ctrl.device,
+			"failed to send request %d\n", ret);
+		if (ret != -EPIPE && ret != -ECONNRESET)
+			nvme_tcp_fail_request(queue->request);
+		nvme_tcp_done_send_req(queue);
+	}
 	return ret;
 }
 
@@ -1059,21 +1066,10 @@ static void nvme_tcp_io_work(struct work_struct *w)
 		int result;
 
 		result = nvme_tcp_try_send(queue);
-		if (result > 0) {
+		if (result > 0)
 			pending = true;
-		} else if (unlikely(result < 0)) {
-			dev_err(queue->ctrl->ctrl.device,
-				"failed to send request %d\n", result);
-
-			/*
-			 * Fail the request unless peer closed the connection,
-			 * in which case error recovery flow will complete all.
-			 */
-			if ((result != -EPIPE) && (result != -ECONNRESET))
-				nvme_tcp_fail_request(queue->request);
-			nvme_tcp_done_send_req(queue);
-			return;
-		}
+		else if (unlikely(result < 0))
+			break;
 
 		result = nvme_tcp_try_recv(queue);
 		if (result > 0)
-- 
2.20.1


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* [PATCH 2/2] nvme-tcp: break from io_work loop if recv failed
  2020-02-26  0:43 [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send Sagi Grimberg
@ 2020-02-26  0:43 ` Sagi Grimberg
  2020-03-10 20:53 ` [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send Keith Busch
  1 sibling, 0 replies; 3+ messages in thread
From: Sagi Grimberg @ 2020-02-26  0:43 UTC (permalink / raw)
  To: linux-nvme, Keith Busch, Christoph Hellwig

If we fail to receive data from the socket, don't try to process
it any further; we will certainly be handling a queue error at
this point. While no issue has been observed with the current
behavior so far, it is safer to cease socket processing once an
error is detected.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/tcp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 221a5a59aa06..814ea2317f4e 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1074,6 +1074,9 @@ static void nvme_tcp_io_work(struct work_struct *w)
 		result = nvme_tcp_try_recv(queue);
 		if (result > 0)
 			pending = true;
+		else if (unlikely(result < 0))
+			break;
+
 
 		if (!pending)
 			return;
-- 
2.20.1



* Re: [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send
  2020-02-26  0:43 [PATCH 1/2] nvme-tcp: move send failure to nvme_tcp_try_send Sagi Grimberg
  2020-02-26  0:43 ` [PATCH 2/2] nvme-tcp: break from io_work loop if recv failed Sagi Grimberg
@ 2020-03-10 20:53 ` Keith Busch
  1 sibling, 0 replies; 3+ messages in thread
From: Keith Busch @ 2020-03-10 20:53 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Christoph Hellwig, linux-nvme

On Tue, Feb 25, 2020 at 04:43:23PM -0800, Sagi Grimberg wrote:
> Consolidate the request failure handling code to where
> it is being fetched (nvme_tcp_try_send).
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>

Added to 5.7, thanks.


