linux-nvme.lists.infradead.org archive mirror
* [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
@ 2021-01-13 11:51 elad.grupi
  2021-01-13 22:47 ` Sagi Grimberg
  2021-03-16  6:21 ` Sagi Grimberg
  0 siblings, 2 replies; 17+ messages in thread
From: elad.grupi @ 2021-01-13 11:51 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

    In case there is an IO that contains inline data and it goes down
    the parsing error flow, the command response will free the command
    and iov before the data has been cleared from the socket buffer.
    Fix this by delaying the command response until the receive flow
    is completed.

Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d535080b781f..dea94da4c9ba 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
 static struct nvmet_fabrics_ops nvmet_tcp_ops;
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
 static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
+static void nvmet_tcp_queue_response(struct nvmet_req *req);
 
 static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
 		struct nvmet_tcp_cmd *cmd)
@@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
 		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
 	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
 		nvmet_setup_r2t_pdu(queue->snd_cmd);
-	else
+	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
+			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
+		nvmet_tcp_queue_response(&queue->snd_cmd->req);
+		queue->snd_cmd = NULL;
+	} else
 		nvmet_setup_response_pdu(queue->snd_cmd);
 
 	return queue->snd_cmd;
-- 
2.16.5
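
To make the new branch easier to follow, here is a small stand-alone model
of the send-side decision in nvmet_tcp_fetch_cmd() with this patch applied.
The struct and the two predicates below are simplified stand-ins written for
illustration only (the real nvmet_tcp_has_data_in()/nvmet_tcp_has_inline_data()
operate on struct nvmet_tcp_cmd and the queue state), so read it as a sketch
of the logic rather than as driver code:
--
/* Stand-alone model of the send-side decision this patch changes in
 * nvmet_tcp_fetch_cmd(). Types and predicates are simplified stand-ins,
 * not the driver's definitions; the read/C2H branch is omitted.
 * Build with: cc -o fetch_model fetch_model.c
 */
#include <stdbool.h>
#include <stdio.h>

struct cmd {
	bool is_write;		/* host-to-controller data expected */
	bool inline_data;	/* that data travels inline with the command */
	bool init_failed;	/* nvmet_req_init() rejected the command */
	int  rbytes_done;	/* data bytes already read off the socket */
	int  transfer_len;	/* total data length */
};

/* write command that still has data on the wire */
static bool has_data_in(const struct cmd *c)
{
	return c->is_write && c->rbytes_done < c->transfer_len;
}

/* same, but only if the command did not fail parsing */
static bool need_data_in(const struct cmd *c)
{
	return has_data_in(c) && !c->init_failed;
}

static const char *fetch_decision(const struct cmd *c)
{
	if (need_data_in(c))
		return "set up R2T and receive the data";
	if (has_data_in(c) && c->inline_data)
		/* the new branch: parsing failed but the inline data is
		 * still on the socket, so requeue the response instead of
		 * sending it (and freeing the iov) right now */
		return "requeue the response until the inline data is drained";
	return "set up the response PDU";
}

int main(void)
{
	struct cmd failed_inline = {
		.is_write = true, .inline_data = true, .init_failed = true,
		.rbytes_done = 0, .transfer_len = 1024,
	};

	printf("%s\n", fetch_decision(&failed_inline));
	return 0;
}
--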



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-13 11:51 [PATCH] nvme-tcp: fix a segmentation fault during io parsing error elad.grupi
@ 2021-01-13 22:47 ` Sagi Grimberg
  2021-01-14 11:51   ` Grupi, Elad
  2021-03-16  6:21 ` Sagi Grimberg
  1 sibling, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2021-01-13 22:47 UTC (permalink / raw)
  To: elad.grupi, linux-nvme

Hey Elad,

> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an io that contains inline data and it goes to
>      parsing error flow, command response will free command and iov
>      before clearing the data on the socket buffer.
>      This will delay the command response until receive flow is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>
> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {

This deserves a comment I think.

Did you get a chance to look into what I sent you guys?

> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;
> +	} else
>   		nvmet_setup_response_pdu(queue->snd_cmd);
>   
>   	return queue->snd_cmd;
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-13 22:47 ` Sagi Grimberg
@ 2021-01-14 11:51   ` Grupi, Elad
  2021-01-14 21:18     ` Sagi Grimberg
  0 siblings, 1 reply; 17+ messages in thread
From: Grupi, Elad @ 2021-01-14 11:51 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

Thank you Sagi,

I will submit a new patch with a comment as requested.
The patch you sent did not solve that issue; I submitted some comments on the relevant thread.

Elad

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Thursday, 14 January 2021 0:47
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error



Hey Elad,

> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an io that contains inline data and it goes to
>      parsing error flow, command response will free command and iov
>      before clearing the data on the socket buffer.
>      This will delay the command response until receive flow is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>
> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c 
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {

This deserves a comment I think.

Did you get a chance to look into what I sent you guys?

> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;
> +	} else
>   		nvmet_setup_response_pdu(queue->snd_cmd);
>   
>   	return queue->snd_cmd;
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-14 11:51   ` Grupi, Elad
@ 2021-01-14 21:18     ` Sagi Grimberg
  2021-01-14 21:41       ` Grupi, Elad
  0 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2021-01-14 21:18 UTC (permalink / raw)
  To: Grupi, Elad, linux-nvme


> Thank you Sagi,
> 
> I will submit a new patch with a comment as requested.

Thanks.

> The patch you sent did not solve that issue, I submitted some comments on the relevant thread.

This was a silly typo; did you check whether it addressed your issue once
you applied the needed change?

There is stuff in there that I think we need in order to sort out other
issues, for which your bug report is the trigger.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-14 21:18     ` Sagi Grimberg
@ 2021-01-14 21:41       ` Grupi, Elad
  2021-01-16  1:19         ` Sagi Grimberg
  0 siblings, 1 reply; 17+ messages in thread
From: Grupi, Elad @ 2021-01-14 21:41 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

> This was a silly typo, did you check if it addressed your issue when you applied the needed change?

Yes. We applied the needed change before testing your patch, but we still had a kernel panic in the recv flow when there was unread inline data on the socket.

> There are stuff there that I think we need to use for sorting out other issues where your bug report is the trigger for this.

I agree, the changes look good.

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Thursday, 14 January 2021 23:18
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




> Thank you Sagi,
> 
> I will submit a new patch with a comment as requested.

Thanks.

> The patch you sent did not solve that issue, I submitted some comments on the relevant thread.

This was a silly typo, did you check if it addressed your issue when you applied the needed change?

There are stuff there that I think we need to use for sorting out other issues where your bug report is the trigger for this.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-14 21:41       ` Grupi, Elad
@ 2021-01-16  1:19         ` Sagi Grimberg
  2021-01-17  9:46           ` Grupi, Elad
  0 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2021-01-16  1:19 UTC (permalink / raw)
  To: Grupi, Elad, linux-nvme


>> This was a silly typo, did you check if it addressed your issue when you applied the needed change?
> 
> Yes. We applied the needed change before testing your patch, but still had kernel panic on recv flow when we had unread inline data on the socket.


Can you share the stack trace?


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-16  1:19         ` Sagi Grimberg
@ 2021-01-17  9:46           ` Grupi, Elad
  2021-01-31 15:48             ` Grupi, Elad
  2021-03-16  9:35             ` Hou Pu
  0 siblings, 2 replies; 17+ messages in thread
From: Grupi, Elad @ 2021-01-17  9:46 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

Yes, it was reproduced with the validity check of the resv1 field of the connect command.

Jan 17 09:43:36 FNM00183301763-A kernel: nvmet_tcp: failed cmd 000000006fe9c801 id 0 opcode 127, data_len: 1024                                                                                                                                          
Jan 17 09:43:36 FNM00183301763-A kernel: general protection fault: 0000 [#1] SMP NOPTI                                                                                                                                                                   
Jan 17 09:43:36 FNM00183301763-A kernel: CPU: 10 PID: 615 Comm: kworker/10:1H Kdump: loaded Tainted: P           OE     4.19.158-coreos-r9999.188fe4a1ba192d57eaaa3462db70e1d9 #1                                                                        
Jan 17 09:43:36 FNM00183301763-A kernel: Hardware name: EMC 900-532-002/110-551-910C-01, BIOS 20.12 11/12/2020                                                                                                                                           
Jan 17 09:43:36 FNM00183301763-A kernel: Workqueue: nvmet_tcp_wq nvmet_tcp_io_work [nvmet_tcp]                                                                                                                                                           
Jan 17 09:43:36 FNM00183301763-A kernel: RIP: 0010:memcpy_erms+0x6/0x10                                                                                                                                                                                  
Jan 17 09:43:36 FNM00183301763-A kernel: Code: 90 90 90 90 eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe         
Jan 17 09:43:36 FNM00183301763-A kernel: RSP: 0018:ffffc90007bc3c70 EFLAGS: 00010206
Jan 17 09:43:36 FNM00183301763-A kernel: RAX: 34e40d75735a3225 RBX: 0000000000000400 RCX: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: RDX: 0000000000000400 RSI: ffff8887f03368ca RDI: 34e40d75735a3225
Jan 17 09:43:36 FNM00183301763-A kernel: RBP: ffff88884ead9d60 R08: ffff8888b53e0530 R09: ffff889789776b4c
Jan 17 09:43:36 FNM00183301763-A kernel: R10: ffff8897897766c8 R11: ffff8887f03368ca R12: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: R13: 0000000000000400 R14: ffff8887f0336cca R15: 0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: FS:  0000000000000000(0000) GS:ffff8897dfe80000(0000) knlGS:0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 17 09:43:36 FNM00183301763-A kernel: CR2: 00007f592924d890 CR3: 000000000220a002 CR4: 00000000007606e0
Jan 17 09:43:36 FNM00183301763-A kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: PKRU: 55555554
Jan 17 09:43:36 FNM00183301763-A kernel: Call Trace:
Jan 17 09:43:36 FNM00183301763-A kernel:  _copy_to_iter+0x333/0x3d0
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_write_xmit+0x403/0xfe0
Jan 17 09:43:36 FNM00183301763-A kernel:  skb_copy_datagram_iter+0x7b/0x260
Jan 17 09:43:36 FNM00183301763-A kernel:  tcp_recvmsg+0x238/0xc90
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_sendpage_locked+0x44/0x60
Jan 17 09:43:36 FNM00183301763-A kernel:  ? __local_bh_enable_ip+0x60/0x70
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_sendpage+0x41/0x50
Jan 17 09:43:36 FNM00183301763-A kernel:  inet_recvmsg+0x5b/0xd0
Jan 17 09:43:36 FNM00183301763-A kernel:  nvmet_tcp_io_work+0x116/0xb40 [nvmet_tcp]
Jan 17 09:43:36 FNM00183301763-A kernel:  process_one_work+0x206/0x400
Jan 17 09:43:36 FNM00183301763-A kernel:  worker_thread+0x2d/0x3e0
Jan 17 09:43:36 FNM00183301763-A kernel:  ? process_one_work+0x400/0x400
Jan 17 09:43:36 FNM00183301763-A kernel:  kthread+0x112/0x130
Jan 17 09:43:36 FNM00183301763-A kernel:  ? kthread_bind+0x20/0x20
Jan 17 09:43:36 FNM00183301763-A kernel:  ret_from_fork+0x1f/0x40
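
For reference, opcode 127 (0x7f) in the failed cmd line above is the fabrics
command opcode, so this is a connect command carrying 1024 bytes of inline
data that gets rejected at parse time. A check of the kind below is enough to
force that failure while the inline data is still sitting unread on the
socket; the function name and exact test are hypothetical, shown only to
illustrate the shape of the reproduction, not the code we actually used:
--
/*
 * Hypothetical debug check, for illustration only: rejecting the connect
 * command early makes nvmet_req_init() fail while the command's inline
 * data (data_len 1024 in the log) is still queued on the socket, which
 * is exactly the window the crash needs.
 */
static u16 nvmet_connect_check_resv1(struct nvmf_connect_command *c)
{
	if (c->resv1) {
		pr_err("connect: non-zero resv1 field\n");
		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
	}
	return 0;
}
--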


-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Saturday, 16 January 2021 3:19
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




>> This was a silly typo, did you check if it addressed your issue when you applied the needed change?
> 
> Yes. We applied the needed change before testing your patch, but still had kernel panic on recv flow when we had unread inline data on the socket.


Can you share the stack trace?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-17  9:46           ` Grupi, Elad
@ 2021-01-31 15:48             ` Grupi, Elad
  2021-03-16  9:35             ` Hou Pu
  1 sibling, 0 replies; 17+ messages in thread
From: Grupi, Elad @ 2021-01-31 15:48 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

Any update on this issue?

-----Original Message-----
From: Grupi, Elad 
Sent: Sunday, 17 January 2021 11:47
To: Sagi Grimberg; linux-nvme@lists.infradead.org
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error

Yes, it was reproduced with the validity check of the resv1 field of the connect command.

Jan 17 09:43:36 FNM00183301763-A kernel: nvmet_tcp: failed cmd 000000006fe9c801 id 0 opcode 127, data_len: 1024                                                                                                                                          
Jan 17 09:43:36 FNM00183301763-A kernel: general protection fault: 0000 [#1] SMP NOPTI                                                                                                                                                                   
Jan 17 09:43:36 FNM00183301763-A kernel: CPU: 10 PID: 615 Comm: kworker/10:1H Kdump: loaded Tainted: P           OE     4.19.158-coreos-r9999.188fe4a1ba192d57eaaa3462db70e1d9 #1                                                                        
Jan 17 09:43:36 FNM00183301763-A kernel: Hardware name: EMC 900-532-002/110-551-910C-01, BIOS 20.12 11/12/2020                                                                                                                                           
Jan 17 09:43:36 FNM00183301763-A kernel: Workqueue: nvmet_tcp_wq nvmet_tcp_io_work [nvmet_tcp]                                                                                                                                                           
Jan 17 09:43:36 FNM00183301763-A kernel: RIP: 0010:memcpy_erms+0x6/0x10                                                                                                                                                                                  
Jan 17 09:43:36 FNM00183301763-A kernel: Code: 90 90 90 90 eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe         
Jan 17 09:43:36 FNM00183301763-A kernel: RSP: 0018:ffffc90007bc3c70 EFLAGS: 00010206
Jan 17 09:43:36 FNM00183301763-A kernel: RAX: 34e40d75735a3225 RBX: 0000000000000400 RCX: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: RDX: 0000000000000400 RSI: ffff8887f03368ca RDI: 34e40d75735a3225
Jan 17 09:43:36 FNM00183301763-A kernel: RBP: ffff88884ead9d60 R08: ffff8888b53e0530 R09: ffff889789776b4c
Jan 17 09:43:36 FNM00183301763-A kernel: R10: ffff8897897766c8 R11: ffff8887f03368ca R12: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: R13: 0000000000000400 R14: ffff8887f0336cca R15: 0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: FS:  0000000000000000(0000) GS:ffff8897dfe80000(0000) knlGS:0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 17 09:43:36 FNM00183301763-A kernel: CR2: 00007f592924d890 CR3: 000000000220a002 CR4: 00000000007606e0
Jan 17 09:43:36 FNM00183301763-A kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jan 17 09:43:36 FNM00183301763-A kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Jan 17 09:43:36 FNM00183301763-A kernel: PKRU: 55555554
Jan 17 09:43:36 FNM00183301763-A kernel: Call Trace:
Jan 17 09:43:36 FNM00183301763-A kernel:  _copy_to_iter+0x333/0x3d0
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_write_xmit+0x403/0xfe0
Jan 17 09:43:36 FNM00183301763-A kernel:  skb_copy_datagram_iter+0x7b/0x260
Jan 17 09:43:36 FNM00183301763-A kernel:  tcp_recvmsg+0x238/0xc90
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_sendpage_locked+0x44/0x60
Jan 17 09:43:36 FNM00183301763-A kernel:  ? __local_bh_enable_ip+0x60/0x70
Jan 17 09:43:36 FNM00183301763-A kernel:  ? tcp_sendpage+0x41/0x50
Jan 17 09:43:36 FNM00183301763-A kernel:  inet_recvmsg+0x5b/0xd0
Jan 17 09:43:36 FNM00183301763-A kernel:  nvmet_tcp_io_work+0x116/0xb40 [nvmet_tcp]
Jan 17 09:43:36 FNM00183301763-A kernel:  process_one_work+0x206/0x400
Jan 17 09:43:36 FNM00183301763-A kernel:  worker_thread+0x2d/0x3e0
Jan 17 09:43:36 FNM00183301763-A kernel:  ? process_one_work+0x400/0x400
Jan 17 09:43:36 FNM00183301763-A kernel:  kthread+0x112/0x130
Jan 17 09:43:36 FNM00183301763-A kernel:  ? kthread_bind+0x20/0x20
Jan 17 09:43:36 FNM00183301763-A kernel:  ret_from_fork+0x1f/0x40


-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me>
Sent: Saturday, 16 January 2021 3:19
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




>> This was a silly typo, did you check if it addressed your issue when you applied the needed change?
> 
> Yes. We applied the needed change before testing your patch, but still had kernel panic on recv flow when we had unread inline data on the socket.


Can you share the stack trace?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-13 11:51 [PATCH] nvme-tcp: fix a segmentation fault during io parsing error elad.grupi
  2021-01-13 22:47 ` Sagi Grimberg
@ 2021-03-16  6:21 ` Sagi Grimberg
  2021-03-16 15:45   ` Grupi, Elad
  1 sibling, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2021-03-16  6:21 UTC (permalink / raw)
  To: elad.grupi, linux-nvme


> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an io that contains inline data and it goes to
>      parsing error flow, command response will free command and iov
>      before clearing the data on the socket buffer.
>      This will delay the command response until receive flow is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>

Hey Elad,

I just realized that this patch was left unaddressed.

> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;

Perhaps instead of rotating the command on the list, don't queue it in
queue_response at all, but rather only once you have finished reading
the garbage?

Something like the following:
--
@@ -537,6 +537,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
                 container_of(req, struct nvmet_tcp_cmd, req);
         struct nvmet_tcp_queue  *queue = cmd->queue;

+       if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+                    nvmet_tcp_has_inline_data(cmd))) {
+               /* fail the cmd when we finish processing the inline data */
+               return;
+       }
+
         llist_add(&cmd->lentry, &queue->resp_list);
         queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
  }
@@ -1115,9 +1121,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
         }
         nvmet_tcp_unmap_pdu_iovec(cmd);

-       if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-           cmd->rbytes_done == cmd->req.transfer_len) {
-               cmd->req.execute(&cmd->req);
+       if (cmd->rbytes_done == cmd->req.transfer_len) {
+               if (cmd->flags & NVMET_TCP_F_INIT_FAILED)
+                       nvmet_tcp_queue_response(&cmd->req);
+               else
+                       cmd->req.execute(&cmd->req);
         }

         nvmet_prepare_receive_pdu(queue);
--
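
The intent is that the failed command is never placed on resp_list while its
inline data is still outstanding; the receive path queues it exactly once,
after the last data byte has been drained. A rough stand-alone model of that
ordering (the types, the deferral condition and the "queueing" are simplified
stand-ins, not the driver code):
--
/* Stand-alone model of the ordering the snippet above aims for. The types,
 * the deferral condition and the "queueing" are simplified stand-ins, not
 * the driver code. Build with: cc -o defer_model defer_model.c
 */
#include <stdbool.h>
#include <stdio.h>

struct cmd {
	bool init_failed;	/* nvmet_req_init() rejected the command */
	bool has_inline;	/* inline data travels with the command PDU */
	int  rbytes_done, transfer_len;
	bool queued;		/* placed on the response list */
};

static void queue_response(struct cmd *c)
{
	/* defer: don't queue the failed command while its inline data is
	 * still unread, otherwise the send path would free the iov that
	 * the receive path is about to copy into */
	if (c->init_failed && c->has_inline &&
	    c->rbytes_done < c->transfer_len)
		return;
	c->queued = true;
}

static void try_recv_data(struct cmd *c, int bytes)
{
	c->rbytes_done += bytes;
	if (c->rbytes_done == c->transfer_len) {
		if (c->init_failed)
			queue_response(c);	/* now it is safe to respond */
		/* else: cmd->req.execute() in the real driver */
	}
}

int main(void)
{
	struct cmd c = { .init_failed = true, .has_inline = true,
			 .transfer_len = 1024 };

	queue_response(&c);		/* parse-failure path: deferred */
	printf("queued after failure: %d\n", c.queued);	/* prints 0 */

	try_recv_data(&c, 1024);	/* io_work drains the inline data */
	printf("queued after drain:   %d\n", c.queued);	/* prints 1 */
	return 0;
}
--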


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-01-17  9:46           ` Grupi, Elad
  2021-01-31 15:48             ` Grupi, Elad
@ 2021-03-16  9:35             ` Hou Pu
  2021-03-16 15:52               ` Grupi, Elad
  1 sibling, 1 reply; 17+ messages in thread
From: Hou Pu @ 2021-03-16  9:35 UTC (permalink / raw)
  To: elad.grupi; +Cc: linux-nvme, sagi

Hi Elad and Sagi,

I think this bug is the same one I am trying to fix. Sorry, I did not notice
in time that Elad was trying to fix this.

Elad, could you please take a look at this thread? I think these two bugs are the same.
The callback is the same.

[PATCH] nvmet-tcp: finish receiving before send back response if nvmet_req_init() failed.
https://lore.kernel.org/linux-nvme/a4ae0e4b-3d59-3a5a-1533-4545e2e4633e@gmail.com/T/#t

>@@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   	  nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>-	else
>+	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
>+			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
>+		nvmet_tcp_queue_response(&queue->snd_cmd->req);
>+		queue->snd_cmd = NULL;
>+	} else

Here, when we fetch a new cmd and find it still has data remaining to be read,
I think the request might in fact already have been replied to by nvmet_req_init(),
and the inline data should also be consumed by reading it from the socket.
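
To make that ordering concrete, here is a rough stand-alone model of the
failing sequence as I understand it; everything in it is a simplified
stand-in, not the nvmet-tcp code, and it crashes on purpose:
--
/* Rough model of the failing order of events: the error response goes out
 * (and the receive buffer is freed) while the command's inline data is
 * still on the socket. Everything here is a simplified stand-in for the
 * nvmet-tcp code, and the program crashes deliberately at the memcpy().
 * Build with: cc -o uaf_model uaf_model.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cmd {
	char *iov;		/* receive buffer for the inline data */
	int   transfer_len;
};

static void req_init_fails_and_replies(struct cmd *c)
{
	/* parse error: the response is sent right away and the command's
	 * resources, including the iov, are released... */
	free(c->iov);
	c->iov = NULL;
}

static void recv_inline_data(struct cmd *c, const char *wire, int len)
{
	/* ...but the inline data is still queued on the socket, and the
	 * receive path later copies it into the buffer that is gone */
	memcpy(c->iov, wire, len);	/* faults: c->iov was freed above */
}

int main(void)
{
	struct cmd c = { .iov = malloc(1024), .transfer_len = 1024 };
	char wire[1024] = { 0 };

	req_init_fails_and_replies(&c);			/* response first... */
	recv_inline_data(&c, wire, sizeof(wire));	/* ...data second: crash */
	printf("not reached\n");
	return 0;
}
--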


Thanks,
Hou



^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-03-16  6:21 ` Sagi Grimberg
@ 2021-03-16 15:45   ` Grupi, Elad
  2021-03-18  8:31     ` Grupi, Elad
  0 siblings, 1 reply; 17+ messages in thread
From: Grupi, Elad @ 2021-03-16 15:45 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

Right. I will address the comment below and send a new patch.

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Tuesday, 16 March 2021 8:21
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an io that contains inline data and it goes to
>      parsing error flow, command response will free command and iov
>      before clearing the data on the socket buffer.
>      This will delay the command response until receive flow is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>

Hey Elad,

I just realized that this patch was left unaddressed.

> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c 
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;

Perhaps instead of rotating the command on the list, maybe instead don't queue it in queue_response but rather only when you complete reading the garbage?

Something like the following:
--
@@ -537,6 +537,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
                 container_of(req, struct nvmet_tcp_cmd, req);
         struct nvmet_tcp_queue  *queue = cmd->queue;

+       if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+                    nvmet_tcp_has_inline_data(cmd))) {
+               /* fail the cmd when we finish processing the inline data */
+               return;
+       }
+
         llist_add(&cmd->lentry, &queue->resp_list);
         queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
  }
@@ -1115,9 +1121,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
         }
         nvmet_tcp_unmap_pdu_iovec(cmd);

-       if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-           cmd->rbytes_done == cmd->req.transfer_len) {
-               cmd->req.execute(&cmd->req);
+       if (cmd->rbytes_done == cmd->req.transfer_len) {
+               if (cmd->flags & NVMET_TCP_F_INIT_FAILED)
+                       nvmet_tcp_queue_response(&cmd->req);
+               else
+                       cmd->req.execute(&cmd->req);
         }

         nvmet_prepare_receive_pdu(queue);
--

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-03-16  9:35             ` Hou Pu
@ 2021-03-16 15:52               ` Grupi, Elad
  2021-03-17  4:11                 ` Hou Pu
  0 siblings, 1 reply; 17+ messages in thread
From: Grupi, Elad @ 2021-03-16 15:52 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

Hi Hou,

You are correct, this is the same issue.

I have reviewed your patch and I have two concerns:
1. As Sagi mentioned, you might not have all the inline data in the socket at this point.
2. You might hit this on the last round of the receive loop's budget, so you could still reach the response flow before all of the data has been read from the socket (see the sketch below).
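
To spell out point 2: io_work alternates a bounded receive pass with a send
pass in the same iteration, so if the receive budget runs out in the middle
of the failed command's inline data, that very iteration can still go on to
transmit a response that was queued too early. A rough stand-alone model of
that window (the budgets, types and the "response" are simplified stand-ins,
not the nvmet-tcp code):
--
/* Rough model of the window described in point 2: a bounded receive pass
 * followed by a send pass inside the same io_work iteration. The budgets,
 * types and the "response" are simplified stand-ins, not the nvmet-tcp
 * code. Build with: cc -o budget_model budget_model.c
 */
#include <stdbool.h>
#include <stdio.h>

#define RECV_BUDGET 8		/* stand-in for the driver's receive budget */

struct cmd {
	int  rbytes_done, transfer_len;
	bool response_queued;	/* queued early by the parse-error path */
	bool iov_freed;		/* freed once the response has been sent */
};

static void try_recv(struct cmd *c)
{
	int budget = RECV_BUDGET;

	/* one bounded receive pass; it may stop in the middle of a command */
	while (budget-- && c->rbytes_done < c->transfer_len)
		c->rbytes_done++;	/* stands in for one chunk of data */
}

static void try_send(struct cmd *c)
{
	if (c->response_queued) {
		c->iov_freed = true;	/* response sent, iov released */
		c->response_queued = false;
	}
}

int main(void)
{
	/* failed command with 16 "chunks" of inline data; the response was
	 * already queued by the parse-error path, as discussed above */
	struct cmd c = { .transfer_len = 16, .response_queued = true };

	try_recv(&c);	/* budget runs out after 8 of the 16 chunks */
	try_send(&c);	/* same iteration: response goes out, iov is freed */

	printf("data left: %d, iov freed: %d\n",
	       c.transfer_len - c.rbytes_done, c.iov_freed);
	/* prints "data left: 8, iov freed: 1"; the next receive pass would
	 * now copy the remaining chunks into freed memory */
	return 0;
}
--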

Elad

-----Original Message-----
From: Hou Pu <houpu.main@gmail.com> 
Sent: Tuesday, 16 March 2021 11:36
To: Grupi, Elad
Cc: linux-nvme@lists.infradead.org; sagi@grimberg.me
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error



Hi Elad and Sagi,

I think this bug is the same one I am trying to fix. Sorry I did not notice Elad was trying to fix this in time.

Elad, could please take a look at this thread. I think these two bug are same.
The callback is same.

[PATCH] nvmet-tcp: finish receiving before send back response if nvmet_req_init() failed.
https://lore.kernel.org/linux-nvme/a4ae0e4b-3d59-3a5a-1533-4545e2e4633e@gmail.com/T/#t

>@@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   	  nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>-	else
>+	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
>+			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
>+		nvmet_tcp_queue_response(&queue->snd_cmd->req);
>+		queue->snd_cmd = NULL;
>+	} else

Here when we get a new cmd and find it has data remains to be read.
I think that in fact this request might already been replied by nvmet_req_init().
And the inline data should also be consumed by read it from the socket.


Thanks,
Hou


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-03-16 15:52               ` Grupi, Elad
@ 2021-03-17  4:11                 ` Hou Pu
  2021-03-18  8:31                   ` Grupi, Elad
  0 siblings, 1 reply; 17+ messages in thread
From: Hou Pu @ 2021-03-17  4:11 UTC (permalink / raw)
  To: Grupi, Elad; +Cc: linux-nvme, sagi


On 2021/3/16 11:52, Grupi, Elad wrote:
> Hi Hou,
>
> You are correct, this is the same issue.
>
> I have reviewed your patch and I have two concerns:
> 1. As Sagi mentioned, you might not have all the inline data in the socket at this point.
> 2. You might get hit on the last round of the budget of the receive loop, so you will still have a chance of hitting the response flow before reading all the data from the socket.

Yes, I realize my patch has these problems.

Thanks,

Hou



^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-03-17  4:11                 ` Hou Pu
@ 2021-03-18  8:31                   ` Grupi, Elad
  0 siblings, 0 replies; 17+ messages in thread
From: Grupi, Elad @ 2021-03-18  8:31 UTC (permalink / raw)
  To: Hou Pu; +Cc: linux-nvme, sagi

I have submitted a new patch that addresses the issues below:

http://lists.infradead.org/pipermail/linux-nvme/2021-March/023824.html

Elad


-----Original Message-----
From: Hou Pu <houpu.main@gmail.com> 
Sent: Wednesday, 17 March 2021 6:11
To: Grupi, Elad
Cc: linux-nvme@lists.infradead.org; sagi@grimberg.me
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




On 2021/3/16 11:52, Grupi, Elad wrote:
> Hi Hou,
>
> You are correct, this is the same issue.
>
> I have reviewed your patch and I have two concerns:
> 1. As Sagi mentioned, you might not have all the inline data in the socket at this point.
> 2. You might get hit on the last round of the budget of the receive loop, so you will still have a chance of hitting the response flow before reading all the data from the socket.

Yes, I realize my patch has these problems.

Thanks,

Hou


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
  2021-03-16 15:45   ` Grupi, Elad
@ 2021-03-18  8:31     ` Grupi, Elad
  0 siblings, 0 replies; 17+ messages in thread
From: Grupi, Elad @ 2021-03-18  8:31 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

The patch is ready in a new thread:

http://lists.infradead.org/pipermail/linux-nvme/2021-March/023824.html

Elad

-----Original Message-----
From: Grupi, Elad 
Sent: Tuesday, 16 March 2021 17:46
To: Sagi Grimberg; linux-nvme@lists.infradead.org
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error

Right. I will address the comment below and send new patch

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Tuesday, 16 March 2021 8:21
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error




> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an io that contains inline data and it goes to
>      parsing error flow, command response will free command and iov
>      before clearing the data on the socket buffer.
>      This will delay the command response until receive flow is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>

Hey Elad,

I just realized that this patch was left unaddressed.

> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c 
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;

Perhaps instead of rotating the command on the list, maybe instead don't queue it in queue_response but rather only when you complete reading the garbage?

Something like the following:
--
@@ -537,6 +537,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
                 container_of(req, struct nvmet_tcp_cmd, req);
         struct nvmet_tcp_queue  *queue = cmd->queue;

+       if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+                    nvmet_tcp_has_inline_data(cmd))) {
+               /* fail the cmd when we finish processing the inline data */
+               return;
+       }
+
         llist_add(&cmd->lentry, &queue->resp_list);
         queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
  }
@@ -1115,9 +1121,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
         }
         nvmet_tcp_unmap_pdu_iovec(cmd);

-       if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-           cmd->rbytes_done == cmd->req.transfer_len) {
-               cmd->req.execute(&cmd->req);
+       if (cmd->rbytes_done == cmd->req.transfer_len) {
+               if (cmd->flags & NVMET_TCP_F_INIT_FAILED)
+                       nvmet_tcp_queue_response(&cmd->req);
+               else
+                       cmd->req.execute(&cmd->req);
         }

         nvmet_prepare_receive_pdu(queue);
--

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
@ 2021-01-14 13:04 elad.grupi
  0 siblings, 0 replies; 17+ messages in thread
From: elad.grupi @ 2021-01-14 13:04 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <elad.grupi@dell.com>

    In case there is an IO that contains inline data and it goes down
    the parsing error flow, the command response will free the command
    and iov before the data has been cleared from the socket buffer.
    Fix this by delaying the command response until the receive flow
    is completed.

Signed-off-by: Elad Grupi <elad.grupi@dell.com>
---
 drivers/nvme/target/tcp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d535080b781f..f181744b0214 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
 static struct nvmet_fabrics_ops nvmet_tcp_ops;
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
 static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
+static void nvmet_tcp_queue_response(struct nvmet_req *req);
 
 static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
 		struct nvmet_tcp_cmd *cmd)
@@ -476,7 +477,15 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
 		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
 	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
 		nvmet_setup_r2t_pdu(queue->snd_cmd);
-	else
+	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
+			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
+		/*
+		 * wait for inline data before processing the response
+		 * so the iov will not be freed
+		 */
+		nvmet_tcp_queue_response(&queue->snd_cmd->req);
+		queue->snd_cmd = NULL;
+	} else
 		nvmet_setup_response_pdu(queue->snd_cmd);
 
 	return queue->snd_cmd;
-- 
2.16.5



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
@ 2021-01-12 14:00 elad.grupi
  0 siblings, 0 replies; 17+ messages in thread
From: elad.grupi @ 2021-01-12 14:00 UTC (permalink / raw)
  To: sagi, linux-nvme; +Cc: Elad Grupi

From: Elad Grupi <Elad.Grupi@emc.com>

In case there is an IO that contains inline data and it goes down
the parsing error flow, the command response will free the command
and iov before the data has been cleared from the socket buffer.
Fix this by delaying the command response until the receive flow
is completed.

Signed-off-by: Elad Grupi <Elad.Grupi@emc.com>
---
 drivers/nvme/target/tcp.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d535080b781f..dea94da4c9ba 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
 static struct nvmet_fabrics_ops nvmet_tcp_ops;
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
 static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
+static void nvmet_tcp_queue_response(struct nvmet_req *req);
 
 static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
 		struct nvmet_tcp_cmd *cmd)
@@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
 		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
 	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
 		nvmet_setup_r2t_pdu(queue->snd_cmd);
-	else
+	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
+			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
+		nvmet_tcp_queue_response(&queue->snd_cmd->req);
+		queue->snd_cmd = NULL;
+	} else
 		nvmet_setup_response_pdu(queue->snd_cmd);
 
 	return queue->snd_cmd;
-- 
2.16.5



^ permalink raw reply related	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2021-03-18  8:32 UTC | newest]

Thread overview: 17+ messages
2021-01-13 11:51 [PATCH] nvme-tcp: fix a segmentation fault during io parsing error elad.grupi
2021-01-13 22:47 ` Sagi Grimberg
2021-01-14 11:51   ` Grupi, Elad
2021-01-14 21:18     ` Sagi Grimberg
2021-01-14 21:41       ` Grupi, Elad
2021-01-16  1:19         ` Sagi Grimberg
2021-01-17  9:46           ` Grupi, Elad
2021-01-31 15:48             ` Grupi, Elad
2021-03-16  9:35             ` Hou Pu
2021-03-16 15:52               ` Grupi, Elad
2021-03-17  4:11                 ` Hou Pu
2021-03-18  8:31                   ` Grupi, Elad
2021-03-16  6:21 ` Sagi Grimberg
2021-03-16 15:45   ` Grupi, Elad
2021-03-18  8:31     ` Grupi, Elad
  -- strict thread matches above, loose matches on Subject: below --
2021-01-14 13:04 elad.grupi
2021-01-12 14:00 elad.grupi
