Linux-NVME Archive on lore.kernel.org
* [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error
@ 2021-03-30 17:24 elad.grupi
  2021-03-31  7:48 ` Hou Pu
  0 siblings, 1 reply; 13+ messages in thread
From: elad.grupi @ 2021-03-30 17:24 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

In case there is an io that contains inline data and it goes to
the parsing error flow, the command response will free the command
and iov before clearing the data on the socket buffer.
Fix this by delaying the command response until the receive flow
is completed.

Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 70cc507d1565..d159ff426630 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -38,7 +38,8 @@ enum nvmet_tcp_send_state {
 	NVMET_TCP_SEND_DATA,
 	NVMET_TCP_SEND_R2T,
 	NVMET_TCP_SEND_DDGST,
-	NVMET_TCP_SEND_RESPONSE
+	NVMET_TCP_SEND_RESPONSE,
+	NVMET_TCP_SEND_POSTPONED
 };
 
 enum nvmet_tcp_recv_state {
@@ -530,6 +531,14 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
 }
 
+static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
+{
+	if (unlikely(cmd->state == NVMET_TCP_SEND_POSTPONED))
+		nvmet_tcp_queue_response(&cmd->req);
+	else
+		cmd->req.execute(&cmd->req);
+}
+
 static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
 {
 	u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
@@ -702,6 +711,18 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 			return 0;
 	}
 
+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			nvmet_tcp_has_data_in(cmd) &&
+			nvmet_tcp_has_inline_data(cmd))) {
+		/*
+		 * wait for inline data before processing the response
+		 * so the iov will not be freed
+		 */
+		cmd->state = NVMET_TCP_SEND_POSTPONED;
+		queue->snd_cmd = NULL;
+		goto done_send;
+	}
+
 	if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
 		ret = nvmet_try_send_data_pdu(cmd);
 		if (ret <= 0)
@@ -960,7 +981,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
 			le32_to_cpu(req->cmd->common.dptr.sgl.length));
 
 		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
-		return -EAGAIN;
+		return 0;
 	}
 
 	ret = nvmet_tcp_map_data(queue->cmd);
@@ -1103,10 +1124,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 		return 0;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len) {
-		cmd->req.execute(&cmd->req);
-	}
+	if (cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_execute_request(cmd);
 
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
@@ -1143,9 +1162,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
 		goto out;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len)
-		cmd->req.execute(&cmd->req);
+	if (cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_execute_request(cmd);
+
 	ret = 0;
 out:
 	nvmet_prepare_receive_pdu(queue);
-- 
2.18.2


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* Re: [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-30 17:24 [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error elad.grupi
@ 2021-03-31  7:48 ` Hou Pu
  2021-03-31  8:28   ` Grupi, Elad
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
  0 siblings, 2 replies; 13+ messages in thread
From: Hou Pu @ 2021-03-31  7:48 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi

On Tue, 30 Mar 2021 20:24:07 +0300, Elad wrote:
> @@ -960,7 +981,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
>  			le32_to_cpu(req->cmd->common.dptr.sgl.length));
> 
>  		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
> -		return -EAGAIN;
> +		return 0;
>  	}
> 
>  	ret = nvmet_tcp_map_data(queue->cmd);

Hi Elad
By returning 0, the response could be queued twice before it is taken off
the list. Even if we still return -EAGAIN, the cmd could potentially be
queued twice.

I think we'd better not queue the failed cmd in the first place.
Please see the fix I will send shortly.

Thanks,
Hou


* RE: [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  7:48 ` Hou Pu
@ 2021-03-31  8:28   ` Grupi, Elad
  2021-03-31  9:07     ` Hou Pu
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
  1 sibling, 1 reply; 13+ messages in thread
From: Grupi, Elad @ 2021-03-31  8:28 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Not sure I'm following.

Once req_init fails, nvmet_tcp_handle_req_failure is called and changes the state to NVMET_TCP_RECV_DATA.
In the NVMET_TCP_RECV_DATA state we should not queue the response before it is taken off the list.

Am I missing something here?


* Re: [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  8:28   ` Grupi, Elad
@ 2021-03-31  9:07     ` Hou Pu
  2021-03-31 13:06       ` Grupi, Elad
  0 siblings, 1 reply; 13+ messages in thread
From: Hou Pu @ 2021-03-31  9:07 UTC (permalink / raw)
  To: elad.grupi; +Cc: houpu.main, linux-nvme, sagi

On Wed, 31 Mar 2021 08:28:46 +0000, Elad wrote:
> Not sure I'm following.
>
> Once req_init fails, nvmet_tcp_handle_req_failure is called and changes the state to NVMET_TCP_RECV_DATA.
> In the NVMET_TCP_RECV_DATA state we should not queue the response before it is taken off the list.
>
> Am I missing something here?

1. nvmet_tcp_handle_req_failure() is called.
2. 0 is returned from nvmet_tcp_done_recv_pdu().
3. nvmet_tcp_try_recv_data() is called from nvmet_tcp_try_recv_one(). After
   the inline data is consumed, nvmet_tcp_execute_request() is called. Here
   NVMET_TCP_SEND_POSTPONED is not set, as nvmet_try_send_data_pdu() has not
   been called yet (it will be called after we return from
   nvmet_tcp_try_recv_one()).

Thanks,
Hou


* [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  7:48 ` Hou Pu
  2021-03-31  8:28   ` Grupi, Elad
@ 2021-03-31  9:13   ` Hou Pu
  2021-03-31 13:27     ` Grupi, Elad
                       ` (3 more replies)
  1 sibling, 4 replies; 13+ messages in thread
From: Hou Pu @ 2021-03-31  9:13 UTC (permalink / raw)
  To: houpu.main, elad.grupi; +Cc: linux-nvme, sagi

From: Elad Grupi <elad.grupi@dell.com>

In case there is an io that contains inline data and it goes to
the parsing error flow, the command response will free the command
and iov before clearing the data on the socket buffer.
Fix this by delaying the command response until the receive flow
is completed.

Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
Signed-off-by: Elad Grupi <elad.grupi@dell.com>
Signed-off-by: Hou Pu <houpu.main@gmail.com>
---
 drivers/nvme/target/tcp.c | 39 +++++++++++++++++++++++++++++++--------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d658c6e8263a..0759eef3f4da 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -525,11 +525,36 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 	struct nvmet_tcp_cmd *cmd =
 		container_of(req, struct nvmet_tcp_cmd, req);
 	struct nvmet_tcp_queue	*queue = cmd->queue;
+	struct nvme_sgl_desc *sgl;
+	u32 len;
+
+	if (unlikely(cmd == queue->cmd)) {
+		sgl = &cmd->req.cmd->common.dptr.sgl;
+		len = le32_to_cpu(sgl->length);
+
+		/*
+		 * Wait for inline data before processing the response.
+		 * Avoid using helpers, this might happen before
+		 * nvmet_req_init is completed.
+		 */
+		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
+		    len && len < cmd->req.port->inline_data_size &&
+		    nvme_is_write(cmd->req.cmd))
+			return;
+	}
 
 	llist_add(&cmd->lentry, &queue->resp_list);
 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
 }
 
+static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
+{
+	if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
+		nvmet_tcp_queue_response(&cmd->req);
+	else
+		cmd->req.execute(&cmd->req);
+}
+
 static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
 {
 	u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
@@ -961,7 +986,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
 			le32_to_cpu(req->cmd->common.dptr.sgl.length));
 
 		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
-		return -EAGAIN;
+		return 0;
 	}
 
 	ret = nvmet_tcp_map_data(queue->cmd);
@@ -1104,10 +1129,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 		return 0;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len) {
-		cmd->req.execute(&cmd->req);
-	}
+	if (cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_execute_request(cmd);
 
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
@@ -1144,9 +1167,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
 		goto out;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len)
-		cmd->req.execute(&cmd->req);
+	if (cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_execute_request(cmd);
+
 	ret = 0;
 out:
 	nvmet_prepare_receive_pdu(queue);
-- 
2.28.0



* RE: [PATCH v4] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  9:07     ` Hou Pu
@ 2021-03-31 13:06       ` Grupi, Elad
  0 siblings, 0 replies; 13+ messages in thread
From: Grupi, Elad @ 2021-03-31 13:06 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Right. Thank you for the clarification.

Elad


* RE: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
@ 2021-03-31 13:27     ` Grupi, Elad
  2021-04-02 16:17       ` Christoph Hellwig
  2021-04-05 19:03     ` Grupi, Elad
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Grupi, Elad @ 2021-03-31 13:27 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Looks good to me.

Thanks,
Elad


* Re: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31 13:27     ` Grupi, Elad
@ 2021-04-02 16:17       ` Christoph Hellwig
  2021-04-05 19:01         ` Grupi, Elad
  0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2021-04-02 16:17 UTC (permalink / raw)
  To: Grupi, Elad; +Cc: Hou Pu, linux-nvme, sagi

On Wed, Mar 31, 2021 at 01:27:40PM +0000, Grupi, Elad wrote:
> Looks good to me.

Is that a Reviewed-by?


* RE: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-04-02 16:17       ` Christoph Hellwig
@ 2021-04-05 19:01         ` Grupi, Elad
  0 siblings, 0 replies; 13+ messages in thread
From: Grupi, Elad @ 2021-04-05 19:01 UTC (permalink / raw)
  To: Christoph Hellwig, sagi; +Cc: Hou Pu, linux-nvme

Let's have sagi@grimberg.me take another look, please.

Thanks,
Elad


* RE: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
  2021-03-31 13:27     ` Grupi, Elad
@ 2021-04-05 19:03     ` Grupi, Elad
  2021-04-06  3:28       ` Hou Pu
  2021-04-09 17:52     ` Sagi Grimberg
  2021-04-15  7:57     ` Christoph Hellwig
  3 siblings, 1 reply; 13+ messages in thread
From: Grupi, Elad @ 2021-04-05 19:03 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Hi Hou.

I tested this patch on our system and it solved the issue.

I think there is a minor issue with the whitespace indentation in the patch:
+		    len && len < cmd->req.port->inline_data_size &&
+		    nvme_is_write(cmd->req.cmd))

Thanks,
Elad


* Re: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-04-05 19:03     ` Grupi, Elad
@ 2021-04-06  3:28       ` Hou Pu
  0 siblings, 0 replies; 13+ messages in thread
From: Hou Pu @ 2021-04-06  3:28 UTC (permalink / raw)
  To: Grupi, Elad; +Cc: linux-nvme, sagi

On Tue, Apr 6, 2021 at 3:03 AM Grupi, Elad <Elad.Grupi@dell.com> wrote:
>
> Hi Hou.
>
> I tested this patch on our system and it solved the issue.

Thanks.

>
> I think there is a minor issue with the spaces indentation in the patch:
> +                   len && len < cmd->req.port->inline_data_size &&
> +                   nvme_is_write(cmd->req.cmd))

Hi Elad,
I ran scripts/checkpatch.pl; there are no errors or warnings.
This indentation style can be found in many places in tcp.c
and in other files.

Thanks,
Hou



* Re: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
  2021-03-31 13:27     ` Grupi, Elad
  2021-04-05 19:03     ` Grupi, Elad
@ 2021-04-09 17:52     ` Sagi Grimberg
  2021-04-15  7:57     ` Christoph Hellwig
  3 siblings, 0 replies; 13+ messages in thread
From: Sagi Grimberg @ 2021-04-09 17:52 UTC (permalink / raw)
  To: Hou Pu, elad.grupi; +Cc: linux-nvme


> From: Elad Grupi <elad.grupi@dell.com>
> 
> In case there is an io that contains inline data and it goes to
> parsing error flow, command response will free command and iov
> before clearing the data on the socket buffer.
> This will delay the command response until receive flow is completed.
> 
> Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>
> Signed-off-by: Hou Pu <houpu.main@gmail.com>

This looks fine to me:

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH v5] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-31  9:13   ` [PATCH v5] " Hou Pu
                       ` (2 preceding siblings ...)
  2021-04-09 17:52     ` Sagi Grimberg
@ 2021-04-15  7:57     ` Christoph Hellwig
  3 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2021-04-15  7:57 UTC (permalink / raw)
  To: Hou Pu; +Cc: elad.grupi, linux-nvme, sagi

Thanks,

applied to nvme-5.13.

