From: "Theodore Ts'o" <tytso@mit.edu>
To: Changheun Lee <nanich.lee@samsung.com>
Cc: alex_y_xu@yahoo.ca, axboe@kernel.dk, bgoncalv@redhat.com,
bvanassche@acm.org, dm-crypt@saout.de, hch@lst.de,
jaegeuk@kernel.org, linux-block@vger.kernel.org,
linux-ext4@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-nvme@lists.infradead.org, ming.lei@redhat.com,
yi.zhang@redhat.com
Subject: Re: regression: data corruption with ext4 on LUKS on nvme with torvalds master
Date: Thu, 13 May 2021 10:15:31 -0400
Message-ID: <YJ00g8oBZkduQXIe@mit.edu>
In-Reply-To: <20210513094222.17635-1-nanich.lee@samsung.com>
On Thu, May 13, 2021 at 06:42:22PM +0900, Changheun Lee wrote:
>
> The problem might be caused by memory exhaustion, and the memory
> exhaustion would be caused by setting a small bio_max_size. It was not
> reproduced in my VM environment at first, but I reproduced the same
> problem when bio_max_size was forced to 8KB. Too many bio allocations
> would occur with an 8KB bio_max_size.
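
To put rough numbers on that claim (a back-of-the-envelope sketch only;
the 1 GiB write size below is an arbitrary example, not a figure from
the report):

#include <stdio.h>

int main(void)
{
	/* How many bios a single large writeback is split into for a
	 * given maximum bio size.  8 KiB and 1 MiB are the limits
	 * discussed in the thread; 1 GiB of dirty data is arbitrary. */
	unsigned long long write_bytes = 1ULL << 30;              /* 1 GiB */
	unsigned long long limits[] = { 8ULL << 10, 1ULL << 20 };  /* 8 KiB, 1 MiB */

	for (int i = 0; i < 2; i++)
		printf("max bio size %4llu KiB -> ~%llu bios per GiB written\n",
		       limits[i] >> 10, write_bytes / limits[i]);
	return 0;
}

That works out to roughly 131,072 bio allocations per GiB written at an
8 KiB limit versus about 1,024 at 1 MiB, which is the scale of extra
allocation being described.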
Hmm... I'm not sure how to align your diagnosis with the symptoms in
the bug report. If we were limited by memory, that should slow down
the I/O, but we should still be making forward progress, no? And a
forced reboot should not result in data corruption, unless maybe there
was a missing check for a failed memory allocation, causing data to be
written to the wrong location, or a missing error check leading to the
block or file system layer not noticing that a write had failed
(although again, memory exhaustion should not lead to failed writes;
it might slow us down, sure, but if writes are being failed, something
is Badly Going Wrong --- things like writes to the swap device or
writes by the page cleaner must succeed, or else Things Would Go Bad
In A Hurry).
- Ted
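
The missing-error-check scenario above -- an allocation or I/O failure
that is swallowed, so the upper layers believe the data reached disk --
can be sketched in miniature.  This is a userspace analogy only; the
function names are hypothetical and do not correspond to real
block-layer interfaces:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Buggy pattern: a failed allocation is never checked, the caller is
 * told the data was staged, and whatever should have been written is
 * silently lost (or scribbled through a NULL pointer). */
static int stage_write_unchecked(const void *data, size_t len, void **staged)
{
	void *buf = malloc(len);
	/* missing: if (!buf) return -ENOMEM; */
	memcpy(buf, data, len);		/* undefined behaviour if buf is NULL */
	*staged = buf;
	return 0;			/* reports success regardless */
}

/* Correct pattern: the failure is propagated so the caller can retry
 * or surface an I/O error instead of losing the data silently. */
static int stage_write_checked(const void *data, size_t len, void **staged)
{
	void *buf = malloc(len);

	if (!buf)
		return -ENOMEM;
	memcpy(buf, data, len);
	*staged = buf;
	return 0;
}

int main(void)
{
	void *staged = NULL;
	int ret = stage_write_checked("example", 8, &staged);

	printf("checked staging returned %d\n", ret);
	free(staged);
	(void)stage_write_unchecked;	/* buggy variant shown for contrast only */
	return 0;
}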
Thread overview: 13+ messages
[not found] <CGME20210513100034epcas1p4b23892cd77bde73c777eea6dc51c16a4@epcas1p4.samsung.com>
2021-05-13 9:42 ` regression: data corruption with ext4 on LUKS on nvme with torvalds master Changheun Lee
2021-05-13 14:15 ` Theodore Ts'o [this message]
2021-05-13 15:59 ` Bart Van Assche
[not found] ` <0e7b0b6e-e78c-f22d-af8d-d7bdcb597bea@gmail.com>
2021-05-13 19:22 ` Mikulas Patocka
2021-05-13 21:18 ` Bart Van Assche
2021-05-14 9:43 ` Mikulas Patocka
2021-05-14 9:50 ` Mikulas Patocka
[not found] ` <CGME20210514104426epcas1p3ee2f22f8e18c961118795c356e6a14ae@epcas1p3.samsung.com>
2021-05-14 10:26 ` Changheun Lee
2021-07-09 20:45 ` Samuel Mendoza-Jonas
[not found] <1620493841.bxdq8r5haw.none.ref@localhost>
2021-05-08 17:54 ` Alex Xu (Hello71)
2021-05-09 2:29 ` Alex Xu (Hello71)
2021-05-09 3:51 ` Jens Axboe
2021-05-09 14:47 ` Alex Xu (Hello71)