From: "Grupi, Elad" <Elad.Grupi@dell.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
Date: Thu, 18 Mar 2021 08:31:51 +0000	[thread overview]
Message-ID: <DM6PR19MB40114D428D4A81DD9505E5B0EF699@DM6PR19MB4011.namprd19.prod.outlook.com> (raw)
In-Reply-To: <DM6PR19MB401129067320A4B74CB68827EF6B9@DM6PR19MB4011.namprd19.prod.outlook.com>

The patch is ready in a new thread:

http://lists.infradead.org/pipermail/linux-nvme/2021-March/023824.html

Elad

-----Original Message-----
From: Grupi, Elad 
Sent: Tuesday, 16 March 2021 17:46
To: Sagi Grimberg; linux-nvme@lists.infradead.org
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error

Right. I will address the comment below and send a new patch.

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Tuesday, 16 March 2021 8:21
To: Grupi, Elad; linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error


> From: Elad Grupi <elad.grupi@dell.com>
> 
>      In case there is an I/O that contains inline data and it hits
>      the parsing error flow, the command response would free the
>      command and iov before the data on the socket buffer has been
>      cleared.
>      This patch delays the command response until the receive flow
>      is completed.
> 
> Signed-off-by: Elad Grupi <elad.grupi@dell.com>

Hey Elad,

I just realized that this patch was left unaddressed.
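
To recap the problem, here is a minimal user-space sketch of the unsafe
ordering (hypothetical names, not the driver code): the response path
frees the command's buffer while inline data is still pending on the
socket, and the receive path then reads into freed memory:
--
/* Illustrative only -- this deliberately models the use-after-free. */
#include <stdlib.h>
#include <unistd.h>

struct fake_cmd {
	char	*inline_buf;	/* stands in for the command's iov */
	size_t	pending;	/* inline bytes still on the socket */
};

static void buggy_order(int sockfd, struct fake_cmd *c)
{
	free(c->inline_buf);	/* response path frees the cmd and iov... */
	/* ...then the receive flow writes into the freed buffer */
	read(sockfd, c->inline_buf, c->pending);
}
--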

> ---
>   drivers/nvme/target/tcp.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c 
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>   static struct nvmet_fabrics_ops nvmet_tcp_ops;
>   static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>   static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>   
>   static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>   		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>   		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>   	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>   		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +			nvmet_tcp_has_inline_data(queue->snd_cmd)) {
> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;

Perhaps, instead of rotating the command on the list, don't queue it in
queue_response at all, but only queue it once you have finished reading
the garbage?

Something like the following:
--
@@ -537,6 +537,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
                 container_of(req, struct nvmet_tcp_cmd, req);
         struct nvmet_tcp_queue  *queue = cmd->queue;

+       if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+                    nvmet_tcp_has_inline_data(cmd))) {
+               /* fail the cmd when we finish processing the inline data */
+               return;
+       }
+
         llist_add(&cmd->lentry, &queue->resp_list);
         queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
  }
@@ -1115,9 +1121,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
         }
         nvmet_tcp_unmap_pdu_iovec(cmd);

-       if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-           cmd->rbytes_done == cmd->req.transfer_len) {
-               cmd->req.execute(&cmd->req);
+       if (cmd->rbytes_done == cmd->req.transfer_len) {
+               if (cmd->flags & NVMET_TCP_F_INIT_FAILED)
+                       nvmet_tcp_queue_response(&cmd->req);
+               else
+                       cmd->req.execute(&cmd->req);
         }

         nvmet_prepare_receive_pdu(queue);
--
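
The same idea as a stand-alone user-space sketch (hypothetical names,
not the driver code): nothing is completed or freed until every pending
byte has been drained from the socket, which is what deferring
nvmet_tcp_queue_response() until rbytes_done == transfer_len buys us:
--
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct fake_cmd {
	char	*buf;		/* stands in for the command's iov */
	size_t	transfer_len;	/* total inline data expected */
	size_t	rbytes_done;	/* inline data consumed so far */
};

static void drain_then_complete(int fd, struct fake_cmd *c)
{
	while (c->rbytes_done < c->transfer_len) {
		ssize_t n = read(fd, c->buf + c->rbytes_done,
				 c->transfer_len - c->rbytes_done);
		if (n <= 0)
			break;
		c->rbytes_done += n;
	}
	if (c->rbytes_done == c->transfer_len) {
		/* only now is it safe to fail/complete the command */
		free(c->buf);
		c->buf = NULL;
	}
}

int main(void)
{
	int p[2];
	struct fake_cmd c = { .buf = malloc(16), .transfer_len = 5 };

	if (pipe(p) || !c.buf)
		return 1;
	if (write(p[1], "hello", 5) != 5)	/* pending "inline data" */
		return 1;
	close(p[1]);
	drain_then_complete(p[0], &c);
	close(p[0]);
	printf("drained %zu byte(s) before completing\n", c.rbytes_done);
	return 0;
}
--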