From: Sagi Grimberg <sagi@grimberg.me>
To: Hao Wang <pkuwangh@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>, Linux-nvme@lists.infradead.org
Subject: Re: Data corruption when using multiple devices with NVMEoF TCP
Date: Mon, 11 Jan 2021 16:36:33 -0800
Message-ID: <4684c86a-8cc7-c5ae-0d6b-9f0e7c59eda5@grimberg.me>
In-Reply-To: <CAJS6Edi9Es1zR9QC+=kwVjAFAGYrEru4vibW42ffyWoMDutFhQ@mail.gmail.com>

Hey Hao,

> Here is the entire log (and it's a new one, i.e. above snippet not 
> included):
> https://drive.google.com/file/d/16ArIs5-Jw4P2f17A_ftKLm1A4LQUFpmg/view?usp=sharing
> 
> What I found is the data corruption does not always happen, especially 
> when I copy a small directory. So I guess a lot of log entries should 
> just look fine.

So this seems to be a breakage with multipage bvecs that has existed
for some time now, and you are the first one to report it. It appears
to be related to bio merges; what seems strange to me is why this is
only coming up now. Perhaps it is the combination with raid0 that
triggers it, I'm not sure.
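
Just to illustrate the distinction I mean (an untested, throw-away
sketch with made-up helper names, not part of the patch below):
bio_segments() counts single-page segments, while the iov_iter we
build with iov_iter_bvec() steps over whole bvec entries, which may
span multiple pages once bios are merged, so the two counts can
disagree:

#include <linux/bio.h>

static unsigned int count_singlepage_segments(struct bio *bio)
{
	struct bvec_iter iter;
	struct bio_vec bv;
	unsigned int n = 0;

	/* roughly what bio_segments() does: multipage bvecs split per page */
	bio_for_each_segment(bv, bio, iter)
		n++;
	return n;
}

static unsigned int count_multipage_bvecs(struct bio *bio)
{
	struct bvec_iter iter;
	struct bio_vec bv;
	unsigned int n = 0;

	/* what the patch counts: one step per (possibly multipage) bvec */
	bio_for_each_bvec(bv, bio, iter)
		n++;
	return n;
}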

IIUC, this should resolve your issue, care to give it a go?
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 973d5d683180..6bceadc204a8 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -201,8 +201,9 @@ static inline size_t nvme_tcp_req_cur_offset(struct nvme_tcp_request *req)
 
 static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req)
 {
-	return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
-			req->pdu_len - req->pdu_sent);
+	return min_t(size_t, req->iter.count,
+			min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
+				req->pdu_len - req->pdu_sent));
 }
 
 static inline size_t nvme_tcp_pdu_data_left(struct nvme_tcp_request *req)
@@ -223,7 +224,7 @@ static void nvme_tcp_init_iter(struct nvme_tcp_request *req,
 	struct request *rq = blk_mq_rq_from_pdu(req);
 	struct bio_vec *vec;
 	unsigned int size;
-	int nsegs;
+	int nsegs = 0;
 	size_t offset;
 
 	if (rq->rq_flags & RQF_SPECIAL_PAYLOAD) {
@@ -233,11 +234,15 @@ static void nvme_tcp_init_iter(struct nvme_tcp_request *req,
 		offset = 0;
 	} else {
 		struct bio *bio = req->curr_bio;
+		struct bvec_iter bi;
+		struct bio_vec bv;
 
 		vec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
-		nsegs = bio_segments(bio);
+		bio_for_each_bvec(bv, bio, bi) {
+			nsegs++;
+		}
 		size = bio->bi_iter.bi_size;
-		offset = bio->bi_iter.bi_bvec_done;
+		offset = mp_bvec_iter_offset(bio->bi_io_vec, bio->bi_iter) - vec->bv_offset;
 	}
 
 	iov_iter_bvec(&req->iter, dir, vec, nsegs, size);
--
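
For clarity, what the two hunks do: the first one additionally caps the
current send length by what the iov_iter actually has left
(req->iter.count), on top of the existing bvec and PDU bounds, and the
second one makes nsegs count whole (possibly multipage) bvec entries
the same way iov_iter_bvec() will later walk them, instead of per-page
segments via bio_segments().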


Thread overview: 23+ messages
2020-12-22 18:09 Data corruption when using multiple devices with NVMEoF TCP Hao Wang
2020-12-22 19:29 ` Sagi Grimberg
2020-12-22 19:58   ` Hao Wang
2020-12-23  8:41     ` Sagi Grimberg
2020-12-23  8:43       ` Christoph Hellwig
2020-12-23 21:23         ` Sagi Grimberg
2020-12-23 22:23           ` Hao Wang
2020-12-24  1:51         ` Hao Wang
2020-12-24  2:57           ` Sagi Grimberg
2020-12-24 10:28             ` Hao Wang
2020-12-24 17:56               ` Sagi Grimberg
2020-12-25  7:49                 ` Hao Wang
2020-12-25  9:05                   ` Sagi Grimberg
     [not found]                     ` <CAJS6Edgb+yCW5q5dA=MEkL0eYs4MXoopdiz72nhkxpkd5Fe_cA@mail.gmail.com>
2020-12-29  1:25                       ` Sagi Grimberg
2021-01-06  1:53                       ` Sagi Grimberg
2021-01-06  8:21                         ` Hao Wang
2021-01-11  8:56                         ` Hao Wang
2021-01-11 10:11                           ` Sagi Grimberg
     [not found]                             ` <CAJS6Edi9Es1zR9QC+=kwVjAFAGYrEru4vibW42ffyWoMDutFhQ@mail.gmail.com>
2021-01-12  0:36                               ` Sagi Grimberg [this message]
2021-01-12  1:29                                 ` Sagi Grimberg
2021-01-12  2:22                                   ` Ming Lei
2021-01-12  6:49                                     ` Sagi Grimberg
2021-01-12  8:55                                   ` Hao Wang
