linux-nvme.lists.infradead.org archive mirror
* [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
@ 2021-03-18 12:55 elad.grupi
  2021-03-19  3:52 ` Hou Pu
  0 siblings, 1 reply; 10+ messages in thread
From: elad.grupi @ 2021-03-18 12:55 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

In case there is an I/O that contains inline data and it goes through
the parsing error flow, the command response will free the command and
its iov before the inline data has been cleared from the socket buffer.
Fix this by delaying the command response until the receive flow has
completed.

Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 70cc507d1565..5650293acaec 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -702,6 +702,17 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 			return 0;
 	}
 
+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			nvmet_tcp_has_data_in(cmd) &&
+			nvmet_tcp_has_inline_data(cmd))) {
+		/*
+		 * wait for inline data before processing the response
+		 * so the iov will not be freed
+		 */
+		queue->snd_cmd = NULL;
+		goto done_send;
+	}
+
 	if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
 		ret = nvmet_try_send_data_pdu(cmd);
 		if (ret <= 0)
@@ -1106,7 +1117,9 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
 	    cmd->rbytes_done == cmd->req.transfer_len) {
 		cmd->req.execute(&cmd->req);
-	}
+	} else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_queue_response(&cmd->req);
 
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
@@ -1146,6 +1159,8 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
 	    cmd->rbytes_done == cmd->req.transfer_len)
 		cmd->req.execute(&cmd->req);
+	else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED))
+		nvmet_tcp_queue_response(&cmd->req);
 	ret = 0;
 out:
 	nvmet_prepare_receive_pdu(queue);
-- 
2.18.2
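
For context, the sequence being fixed, condensed from the driver and the
discussion below (function names are from drivers/nvme/target/tcp.c; this
is a sketch of the pre-patch flow, not part of the patch):

	/*
	 * A write command carrying inline data fails nvmet_req_init():
	 *
	 *   nvmet_tcp_done_recv_pdu()
	 *     nvmet_req_init()               fails and completes the req,
	 *       __nvmet_req_complete()       which queues the error
	 *         nvmet_tcp_queue_response() response for sending
	 *     nvmet_tcp_handle_req_failure() maps cmd->iov, sets
	 *                                    NVMET_TCP_F_INIT_FAILED and
	 *                                    arms RECV_DATA to drain the
	 *                                    inline data
	 *
	 * io_work may now send the error response and free cmd->iov while
	 * nvmet_tcp_try_recv_data() is still copying inline data into that
	 * iov: a use-after-free, hence the segmentation fault.
	 */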



* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-18 12:55 [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error elad.grupi
@ 2021-03-19  3:52 ` Hou Pu
  2021-03-19 17:26   ` Grupi, Elad
  0 siblings, 1 reply; 10+ messages in thread
From: Hou Pu @ 2021-03-19  3:52 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi, houpu.main

> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 70cc507d1565..5650293acaec 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -702,6 +702,17 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
>  			return 0;
>  	}
>  
> +	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
> +			nvmet_tcp_has_data_in(cmd) &&
> +			nvmet_tcp_has_inline_data(cmd))) {
> +		/*
> +		 * wait for inline data before processing the response
> +		 * so the iov will not be freed
> +		 */
> +		queue->snd_cmd = NULL;
> +		goto done_send;
> +	}
> +

Hi Elad,
Although this works, I think Sagi would prefer not to add such a command
to the response queue in nvmet_tcp_queue_response() in the first place.


>  	if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
>  		ret = nvmet_try_send_data_pdu(cmd);
>  		if (ret <= 0)
> @@ -1106,7 +1117,9 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
>  	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
>  	    cmd->rbytes_done == cmd->req.transfer_len) {
>  		cmd->req.execute(&cmd->req);
> -	}
> +	} else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
> +			cmd->rbytes_done == cmd->req.transfer_len)
> +		nvmet_tcp_queue_response(&cmd->req);
>  
>  	nvmet_prepare_receive_pdu(queue);
>  	return 0;
> @@ -1146,6 +1159,8 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
>  	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
>  	    cmd->rbytes_done == cmd->req.transfer_len)
>  		cmd->req.execute(&cmd->req);
> +	else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED))
> +		nvmet_tcp_queue_response(&cmd->req);
 
Here we also need to check that cmd->rbytes_done == cmd->req.transfer_len,
as we could receive multiple data PDUs.
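
That is, this branch would mirror the try_recv_data hunk above, along
these lines (a sketch reusing the exact condition from that hunk):

	else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
			cmd->rbytes_done == cmd->req.transfer_len)
		nvmet_tcp_queue_response(&cmd->req);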

(BTW, did you forget to add [PATCH v2] to the subject line?)

Thanks,
Hou


* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-19  3:52 ` Hou Pu
@ 2021-03-19 17:26   ` Grupi, Elad
  2021-03-22  4:06     ` Hou Pu
  0 siblings, 1 reply; 10+ messages in thread
From: Grupi, Elad @ 2021-03-19 17:26 UTC (permalink / raw)
  To: Hou Pu, sagi; +Cc: linux-nvme

Right, I see.

But when calling nvmet_tcp_queue_response, the NVMET_TCP_F_INIT_FAILED flag is not yet set.
The flag is only set after nvmet_req_init returns, in nvmet_tcp_handle_req_failure.
We could have nvmet_tcp_queue_response hold back any command that still has pending inline data. Would that work for you?
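
Something like this at the top of nvmet_tcp_queue_response() (a sketch;
it is the shape the v2 patch later in this thread ends up taking):

	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
			nvmet_tcp_has_inline_data(cmd))) {
		/* fail the cmd when we finish processing the inline data */
		return;
	}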

Thanks,
Elad

-----Original Message-----
From: Hou Pu <houpu.main@gmail.com> 
Sent: Friday, 19 March 2021 5:53
To: Grupi, Elad
Cc: linux-nvme@lists.infradead.org; sagi@grimberg.me; houpu.main@gmail.com
Subject: RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error


[...]

* Re: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-19 17:26   ` Grupi, Elad
@ 2021-03-22  4:06     ` Hou Pu
  0 siblings, 0 replies; 10+ messages in thread
From: Hou Pu @ 2021-03-22  4:06 UTC (permalink / raw)
  To: Grupi, Elad, sagi; +Cc: linux-nvme


On 2021/3/20 1:26 AM, Grupi, Elad wrote:
> Right, I see.
>
> But when calling nvmet_tcp_queue_response, the NVMET_TCP_F_INIT_FAILED flag is not yet set.
> The flag is only set after nvmet_req_init returns, in nvmet_tcp_handle_req_failure.
Hmm, that's true.
> We could have nvmet_tcp_queue_response hold back any command that still has pending inline data. Would that work for you?

It's OK with me.


Thanks,

Hou


[...]

* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-26 10:26 ` Hou Pu
@ 2021-03-26 18:00   ` Grupi, Elad
  0 siblings, 0 replies; 10+ messages in thread
From: Grupi, Elad @ 2021-03-26 18:00 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Correct. Fixed both of your comments.

Thanks,
Elad

-----Original Message-----
From: Hou Pu <houpu.main@gmail.com> 
Sent: Friday, 26 March 2021 13:27
To: Grupi, Elad
Cc: linux-nvme@lists.infradead.org; sagi@grimberg.me; houpu.main@gmail.com
Subject: RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error


[...]

* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-25 22:49 elad.grupi
@ 2021-03-26 10:26 ` Hou Pu
  2021-03-26 18:00   ` Grupi, Elad
  0 siblings, 1 reply; 10+ messages in thread
From: Hou Pu @ 2021-03-26 10:26 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi, houpu.main

On Date: Fri, 26 Mar 2021 00:49:52 +0200, Elad Grupi wrote:
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 70cc507d1565..f10fa2b5aaeb 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -154,6 +154,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>  static const struct nvmet_fabrics_ops nvmet_tcp_ops;
>  static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>  static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);

Do we need to declare it here?
 
> @@ -1103,9 +1121,14 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
>  		return 0;
>  	}
 
> -	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
> -	    cmd->rbytes_done == cmd->req.transfer_len) {
> -		cmd->req.execute(&cmd->req);
> +	if (cmd->rbytes_done == cmd->req.transfer_len) {
> +		if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
> +			nvmet_tcp_queue_response(&cmd->req);
> +		else {
> +			if (unlikely(cmd == &queue->connect))
> +				nvmet_tcp_executing_connect_cmd(queue);

Is this from a change that is not yet upstream? I could not find
nvmet_tcp_executing_connect_cmd upstream (5.12-rc4).

> +			cmd->req.execute(&cmd->req);
> +		}


Hi Elad,
The patch looks OK to me except for these two questions.

Thanks,
Hou


* [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
@ 2021-03-25 22:49 elad.grupi
  2021-03-26 10:26 ` Hou Pu
  0 siblings, 1 reply; 10+ messages in thread
From: elad.grupi @ 2021-03-25 22:49 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

In case there is an I/O that contains inline data and it goes through
the parsing error flow, the command response will free the command and
its iov before the inline data has been cleared from the socket buffer.
Fix this by delaying the command response until the receive flow has
completed.

Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 42 +++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 70cc507d1565..f10fa2b5aaeb 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -154,6 +154,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
 static const struct nvmet_fabrics_ops nvmet_tcp_ops;
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
 static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
+static void nvmet_tcp_queue_response(struct nvmet_req *req);
 
 static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
 		struct nvmet_tcp_cmd *cmd)
@@ -526,6 +527,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 		container_of(req, struct nvmet_tcp_cmd, req);
 	struct nvmet_tcp_queue	*queue = cmd->queue;
 
+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			nvmet_tcp_has_inline_data(cmd))) {
+		/* fail the cmd when we finish processing the inline data */
+		return;
+	}
+
 	llist_add(&cmd->lentry, &queue->resp_list);
 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
 }
@@ -702,6 +709,17 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 			return 0;
 	}
 
+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			nvmet_tcp_has_data_in(cmd) &&
+			nvmet_tcp_has_inline_data(cmd))) {
+		/*
+		 * wait for inline data before processing the response
+		 * so the iov will not be freed
+		 */
+		queue->snd_cmd = NULL;
+		goto done_send;
+	}
+
 	if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
 		ret = nvmet_try_send_data_pdu(cmd);
 		if (ret <= 0)
@@ -1103,9 +1121,14 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 		return 0;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len) {
-		cmd->req.execute(&cmd->req);
+	if (cmd->rbytes_done == cmd->req.transfer_len) {
+		if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
+			nvmet_tcp_queue_response(&cmd->req);
+		else {
+			if (unlikely(cmd == &queue->connect))
+				nvmet_tcp_executing_connect_cmd(queue);
+			cmd->req.execute(&cmd->req);
+		}
 	}
 
 	nvmet_prepare_receive_pdu(queue);
@@ -1143,9 +1166,16 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
 		goto out;
 	}
 
-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len)
-		cmd->req.execute(&cmd->req);
+	if (cmd->rbytes_done == cmd->req.transfer_len) {
+		if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
+			nvmet_tcp_queue_response(&cmd->req);
+		else {
+			if (unlikely(cmd == &queue->connect))
+				nvmet_tcp_executing_connect_cmd(queue);
+			cmd->req.execute(&cmd->req);
+		}
+	}
+
 	ret = 0;
 out:
 	nvmet_prepare_receive_pdu(queue);
-- 
2.18.2



* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-18  8:28 elad.grupi
  2021-03-18 11:13 ` Hou Pu
@ 2021-03-18 13:08 ` Hou Pu
  1 sibling, 0 replies; 10+ messages in thread
From: Hou Pu @ 2021-03-18 13:08 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi, houpu.main

I would like to verify this fix with the latest 5.4 stable and upstream kernels.
I will share the results later.

Thanks,
Hou

[...]


* RE: [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
  2021-03-18  8:28 elad.grupi
@ 2021-03-18 11:13 ` Hou Pu
  2021-03-18 13:08 ` Hou Pu
  1 sibling, 0 replies; 10+ messages in thread
From: Hou Pu @ 2021-03-18 11:13 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi

>In case there is an I/O that contains inline data and it goes through
>the parsing error flow, the command response will free the command and
>its iov before the inline data has been cleared from the socket buffer.
>Fix this by delaying the command response until the receive flow has
>completed.
>
>Signed-off-by: Elad Grupi <elad.grupi@dell.com>

Hi Elad,

Could you please add this tag to help with backporting to the stable branches?
 Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")

Thanks,
Hou

[...]


* [PATCH] nvmet-tcp: fix a segmentation fault during io parsing error
@ 2021-03-18  8:28 elad.grupi
  2021-03-18 11:13 ` Hou Pu
  2021-03-18 13:08 ` Hou Pu
  0 siblings, 2 replies; 10+ messages in thread
From: elad.grupi @ 2021-03-18  8:28 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

In case there is an I/O that contains inline data and it goes through
the parsing error flow, the command response will free the command and
its iov before the inline data has been cleared from the socket buffer.
Fix this by delaying the command response until the receive flow has
completed.

Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d658c6e8263a..2b5d0c9c4e38 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -702,6 +702,17 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 			return 0;
 	}
 
+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			nvmet_tcp_has_data_in(cmd) &&
+			nvmet_tcp_has_inline_data(cmd))) {
+		/*
+		 * wait for inline data before processing the response
+		 * so the iov will not be freed
+		 */
+		queue->snd_cmd = NULL;
+		goto done_send;
+	}
+
 	if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
 		ret = nvmet_try_send_data_pdu(cmd);
 		if (ret <= 0)
@@ -1107,7 +1118,9 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
 	    cmd->rbytes_done == cmd->req.transfer_len) {
 		cmd->req.execute(&cmd->req);
-	}
+	} else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+			cmd->rbytes_done == cmd->req.transfer_len)
+		nvmet_tcp_queue_response(&cmd->req);
 
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
@@ -1147,6 +1160,8 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
 	    cmd->rbytes_done == cmd->req.transfer_len)
 		cmd->req.execute(&cmd->req);
+	else if ((cmd->flags & NVMET_TCP_F_INIT_FAILED))
+		nvmet_tcp_queue_response(&cmd->req);
 	ret = 0;
 out:
 	nvmet_prepare_receive_pdu(queue);
-- 
2.18.2


