From: Keith Busch <kbusch@kernel.org>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: linux-nvme@lists.infradead.org, hch@lst.de
Subject: Re: nvme tcp receive errors
Date: Wed, 21 Apr 2021 07:28:41 -0700
Message-ID: <20210421142841.GA3575546@dhcp-10-100-145-180.wdc.com>
In-Reply-To: <5bc917c8-4e4c-7bfa-7cfa-24858993a042@grimberg.me>

On Tue, Apr 20, 2021 at 10:33:30PM -0700, Sagi Grimberg wrote:
> Can you retry with the following applied on top of what I sent you?

Thanks, we'll give this a try. Just a quick question on the first patch:

> --
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index c60c1dcfb587..ff39d37e9793 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -63,6 +63,7 @@ struct nvme_tcp_request {
>         /* send state */
>         size_t                  offset;
>         size_t                  data_sent;
> +       size_t                  data_recvd;
>         enum nvme_tcp_send_state state;
>         enum nvme_tcp_cmd_state cmd_state;
>  };
> @@ -769,6 +770,7 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
>                 *len -= recv_len;
>                 *offset += recv_len;
>                 queue->data_remaining -= recv_len;
> +               req->data_recvd += recv_len;

Does req->data_recvd need to get reset to 0 when the command is initially
set up? It looks like nvme_tcp_setup_cmd_pdu() should do a
"req->data_recvd = 0", no?

>         }
> 
>         if (!queue->data_remaining) {
> @@ -776,6 +778,7 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
>                         nvme_tcp_ddgst_final(queue->rcv_hash, &queue->exp_ddgst);
>                         queue->ddgst_remaining = NVME_TCP_DIGEST_LENGTH;
>                 } else {
> +                       BUG_ON(req->data_recvd != req->data_len);
>                         req->cmd_state = NVME_TCP_CMD_DATA_DONE;
>                         if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
>                                 req->cmd_state = NVME_TCP_CMD_DONE;
> --
> 
> There might be a hidden assumption here that may cause this if multiple
> c2hdata pdus will come per request...
> 
> If that is the case, you can try the following (on top):
> --
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index ff39d37e9793..aabec8e6810a 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -773,19 +773,20 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
>                 req->data_recvd += recv_len;
>         }
> 
> -       if (!queue->data_remaining) {
> +       if (!queue->data_remaining)
> +               nvme_tcp_init_recv_ctx(queue);
> +
> +       if (req->data_recvd == req->data_len) {
>                 if (queue->data_digest) {
>                         nvme_tcp_ddgst_final(queue->rcv_hash, &queue->exp_ddgst);
>                         queue->ddgst_remaining = NVME_TCP_DIGEST_LENGTH;
>                 } else {
> -                       BUG_ON(req->data_recvd != req->data_len);
>                         req->cmd_state = NVME_TCP_CMD_DATA_DONE;
>                         if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
>                                 req->cmd_state = NVME_TCP_CMD_DONE;
>                                 nvme_tcp_end_request(rq, NVME_SC_SUCCESS);
>                                 queue->nr_cqe++;
>                         }
> -                       nvme_tcp_init_recv_ctx(queue);
>                 }
>         }
> --
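
And in case it helps to see the multi-PDU case spelled out, here is a toy
user-space model of the accounting (plain C, not driver code; the names
just mirror the fields above). It shows why completion has to key off
req->data_recvd == req->data_len rather than queue->data_remaining
reaching zero once a target splits one request across several c2hdata
PDUs:

--
#include <stdio.h>
#include <stddef.h>

/* Toy stand-ins for the driver structures; only the counters matter here. */
struct toy_req   { size_t data_len; size_t data_recvd; };
struct toy_queue { size_t data_remaining; };

/* Called for each chunk of payload consumed from the current c2hdata PDU. */
static void recv_data(struct toy_queue *queue, struct toy_req *req,
                      size_t recv_len)
{
        queue->data_remaining -= recv_len;      /* per-PDU countdown */
        req->data_recvd += recv_len;            /* per-request running total */

        if (!queue->data_remaining)
                printf("PDU drained, data_recvd=%zu/%zu\n",
                       req->data_recvd, req->data_len);

        if (req->data_recvd == req->data_len)
                printf("request data complete\n");
}

int main(void)
{
        struct toy_req req = { .data_len = 8192, .data_recvd = 0 };
        struct toy_queue queue;

        /* Target sends an 8K read as two 4K c2hdata PDUs. */
        queue.data_remaining = 4096;
        recv_data(&queue, &req, 4096);  /* PDU 1 drained, request not done */

        queue.data_remaining = 4096;
        recv_data(&queue, &req, 4096);  /* PDU 2 drained, request done */
        return 0;
}
--

In this model the per-PDU counter hits zero after the first 4K while the
request is only half done; keying the completion path off
data_recvd == data_len, as the hunk above does, defers it to the second
PDU.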

