From: Hao Wang <pkuwangh@gmail.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Christoph Hellwig <hch@infradead.org>, Linux-nvme@lists.infradead.org
Subject: Re: Data corruption when using multiple devices with NVMEoF TCP
Date: Tue, 12 Jan 2021 00:55:59 -0800	[thread overview]
Message-ID: <CAJS6EdhW3d46zjXh7TVjFaB_Z4_=kA_fWcaEJKQdtLph56Y+kg@mail.gmail.com> (raw)
In-Reply-To: <fcb56817-1f34-e307-e888-45ebb661c31d@grimberg.me>

Yes, this patch fixes the problem! Thanks!

Tested on top of a0d54b4f5b21.

Hao

On Mon, Jan 11, 2021 at 5:29 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> > Hey Hao,
> >
> >> Here is the entire log (and it's a new one, i.e. above snippet not
> >> included):
> >> https://drive.google.com/file/d/16ArIs5-Jw4P2f17A_ftKLm1A4LQUFpmg/view?usp=sharing
> >>
> >>
> >> What I found is the data corruption does not always happen, especially
> >> when I copy a small directory. So I guess a lot of log entries should
> >> just look fine.
> >
> > So this seems to be a breakage that has existed for some time now with
> > multipage bvecs, and you have been the first one to report it. It
> > seems to be related to bio merges, though it seems strange to me
> > why this comes up only now; perhaps it is the combination with
> > raid0 that triggers it, I'm not sure.
>
> OK, I think I understand what is going on. With multipage bvecs,
> bios can be split in the middle of a bvec entry, and then merged
> back with another bio.
>
> The issue is that we are not capping the send-length calculation
> of the last bvec entry to the bytes remaining in the iterator.
>
> I think that just this can also resolve the issue:
> --
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 973d5d683180..c6b0a189a494 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -201,8 +201,9 @@ static inline size_t nvme_tcp_req_cur_offset(struct nvme_tcp_request *req)
>
>   static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req)
>   {
> -       return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
> -                       req->pdu_len - req->pdu_sent);
> +       return min_t(size_t, req->iter.count,
> +                       min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
> +                               req->pdu_len - req->pdu_sent));
>   }
>
>   static inline size_t nvme_tcp_pdu_data_left(struct nvme_tcp_request *req)
> --
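[Editor's note: the following is not part of the original thread. It is a minimal userspace sketch of the fix above, using a hypothetical `struct fake_req` with simplified field names in place of the kernel's `struct nvme_tcp_request` and `min_t`. It shows why capping the per-send length by the iterator's remaining byte count matters when a bio has been split mid-bvec: the bvec entry can be longer than the bytes that actually belong to this request.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's min_t(size_t, ...) */
static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/* Hypothetical, simplified stand-in for struct nvme_tcp_request */
struct fake_req {
    size_t iter_count;  /* bytes remaining in the iov_iter (iter.count) */
    size_t bv_len;      /* length of the current bvec entry */
    size_t iov_offset;  /* offset into that bvec entry */
    size_t pdu_len;     /* total data length of the PDU */
    size_t pdu_sent;    /* bytes of the PDU already sent */
};

/* Old calculation: ignores how many bytes remain in the iterator, so a
 * bvec entry shared with a neighboring (split/merged) bio can make us
 * send bytes that belong to another request. */
static size_t cur_length_old(const struct fake_req *r)
{
    return min_sz(r->bv_len - r->iov_offset, r->pdu_len - r->pdu_sent);
}

/* Fixed calculation: additionally capped by iter_count, matching the
 * patch's extra min_t(size_t, req->iter.count, ...) term. */
static size_t cur_length_fixed(const struct fake_req *r)
{
    return min_sz(r->iter_count,
                  min_sz(r->bv_len - r->iov_offset,
                         r->pdu_len - r->pdu_sent));
}
```

For example, if a bio split mid-bvec leaves only 2048 bytes in this request's iterator while the underlying bvec entry is 8192 bytes long, the old formula would return 8192 and overrun into the neighboring bio's data; the capped formula returns 2048.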

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

      parent reply	other threads:[~2021-01-12  8:56 UTC|newest]

Thread overview: 23+ messages
2020-12-22 18:09 Data corruption when using multiple devices with NVMEoF TCP Hao Wang
2020-12-22 19:29 ` Sagi Grimberg
2020-12-22 19:58   ` Hao Wang
2020-12-23  8:41     ` Sagi Grimberg
2020-12-23  8:43       ` Christoph Hellwig
2020-12-23 21:23         ` Sagi Grimberg
2020-12-23 22:23           ` Hao Wang
2020-12-24  1:51         ` Hao Wang
2020-12-24  2:57           ` Sagi Grimberg
2020-12-24 10:28             ` Hao Wang
2020-12-24 17:56               ` Sagi Grimberg
2020-12-25  7:49                 ` Hao Wang
2020-12-25  9:05                   ` Sagi Grimberg
     [not found]                     ` <CAJS6Edgb+yCW5q5dA=MEkL0eYs4MXoopdiz72nhkxpkd5Fe_cA@mail.gmail.com>
2020-12-29  1:25                       ` Sagi Grimberg
2021-01-06  1:53                       ` Sagi Grimberg
2021-01-06  8:21                         ` Hao Wang
2021-01-11  8:56                         ` Hao Wang
2021-01-11 10:11                           ` Sagi Grimberg
     [not found]                             ` <CAJS6Edi9Es1zR9QC+=kwVjAFAGYrEru4vibW42ffyWoMDutFhQ@mail.gmail.com>
2021-01-12  0:36                               ` Sagi Grimberg
2021-01-12  1:29                                 ` Sagi Grimberg
2021-01-12  2:22                                   ` Ming Lei
2021-01-12  6:49                                     ` Sagi Grimberg
2021-01-12  8:55                                   ` Hao Wang [this message]
