linux-block.vger.kernel.org archive mirror
From: Changheun Lee <nanich.lee@samsung.com>
To: alex_y_xu@yahoo.ca
Cc: axboe@kernel.dk, bgoncalv@redhat.com, bvanassche@acm.org,
	dm-crypt@saout.de, hch@lst.de, jaegeuk@kernel.org,
	linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	ming.lei@redhat.com, yi.zhang@redhat.com
Subject: Re: regression: data corruption with ext4 on LUKS on nvme with torvalds master
Date: Thu, 13 May 2021 18:42:22 +0900	[thread overview]
Message-ID: <20210513094222.17635-1-nanich.lee@samsung.com> (raw)
In-Reply-To: <CGME20210513100034epcas1p4b23892cd77bde73c777eea6dc51c16a4@epcas1p4.samsung.com>

> Excerpts from Jens Axboe's message of May 8, 2021 11:51 pm:
> > On 5/8/21 8:29 PM, Alex Xu (Hello71) wrote:
> >> Excerpts from Alex Xu (Hello71)'s message of May 8, 2021 1:54 pm:
> >>> Hi all,
> >>>
> >>> Using torvalds master, I recently encountered data corruption on my ext4 
> >>> volume on LUKS on NVMe. Specifically, during heavy writes, the system 
> >>> partially hangs; SysRq-W shows that processes are blocked in the kernel 
> >>> on I/O. After forcibly rebooting, chunks of files are replaced with 
> >>> other, unrelated data. I'm not sure exactly what the data is; some of it 
> >>> is unknown binary data, but in at least one case, a list of file paths 
> >>> was inserted into a file, indicating that the data is misdirected after 
> >>> encryption.
> >>>
> >>> This issue appears to affect files receiving writes in the temporal 
> >>> vicinity of the hang, but affects both new and old data: for example, my 
> >>> shell history file was corrupted up to many months before.
> >>>
> >>> The drive reports no SMART issues.
> >>>
> >>> I believe this is a regression in the kernel related to something merged 
> >>> in the last few days, as it consistently occurs with my most recent 
> >>> kernel versions, but disappears when reverting to an older kernel.
> >>>
> >>> I haven't investigated further, such as by bisecting. I hope this is 
> >>> sufficient information to give someone a lead on the issue, and if it is 
> >>> a bug, nail it down before anybody else loses data.
> >>>
> >>> Regards,
> >>> Alex.
> >>>
> >> 
> >> I found the following test to reproduce a hang, which I guess may be the 
> >> cause:
> >> 
> >> host$ cd /tmp
> >> host$ truncate -s 10G drive
> >> host$ qemu-system-x86_64 -drive format=raw,file=drive,if=none,id=drive -device nvme,drive=drive,serial=1 [... more VM setup options]
> >> guest$ cryptsetup luksFormat /dev/nvme0n1
> >> [accept warning, use any password]
> >> guest$ cryptsetup open /dev/nvme0n1
> >> [enter password]
> >> guest$ mkfs.ext4 /dev/mapper/test
> >> [normal output...]
> >> Creating journal (16384 blocks): [hangs forever]
> >> 
> >> I bisected this issue to:
> >> 
> >> cd2c7545ae1beac3b6aae033c7f31193b3255946 is the first bad commit
> >> commit cd2c7545ae1beac3b6aae033c7f31193b3255946
> >> Author: Changheun Lee <nanich.lee@samsung.com>
> >> Date:   Mon May 3 18:52:03 2021 +0900
> >> 
> >>     bio: limit bio max size
> >> 
> >> I didn't try reverting this commit or further reducing the test case. 
> >> Let me know if you need my kernel config or other information.
> > 
> > If you have time, please do test with that reverted. I'd be anxious to
> > get this revert queued up for 5.13-rc1.
> > 
> > -- 
> > Jens Axboe
> > 
> > 
> 
> I tested reverting it on top of b741596468b010af2846b75f5e75a842ce344a6e 
> ("Merge tag 'riscv-for-linus-5.13-mw1' of 
> git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux"); with the 
> revert, the hang no longer occurs. I didn't check whether this also fixes 
> the data corruption, but I assume so.
> 
> I also tested a 1 GB image (works either way) and a virtio-blk 
> interface (works either way).
> 
> The Show Blocked State from the VM (without revert):
> 
> sysrq: Show Blocked State
> task:kworker/u2:0    state:D stack:    0 pid:    7 ppid:     2 flags:0x00004000
> Workqueue: kcryptd/252:0 kcryptd_crypt
> Call Trace:
> __schedule+0x1a2/0x4f0
> schedule+0x63/0xe0
> schedule_timeout+0x6a/0xd0
> ? lock_timer_base+0x80/0x80
> io_schedule_timeout+0x4c/0x70
> mempool_alloc+0xfc/0x130
> ? __wake_up_common_lock+0x90/0x90
> kcryptd_crypt+0x291/0x4e0
> process_one_work+0x1b1/0x300
> worker_thread+0x48/0x3d0
> ? process_one_work+0x300/0x300
> kthread+0x129/0x150
> ? __kthread_create_worker+0x100/0x100
> ret_from_fork+0x22/0x30
> task:mkfs.ext4       state:D stack:    0 pid:  979 ppid:   964 flags:0x00004000
> Call Trace:
> __schedule+0x1a2/0x4f0
> ? __schedule+0x1aa/0x4f0
> schedule+0x63/0xe0
> schedule_timeout+0x99/0xd0
> io_schedule_timeout+0x4c/0x70
> wait_for_completion_io+0x74/0xc0
> submit_bio_wait+0x46/0x60
> blkdev_issue_zeroout+0x118/0x1f0
> blkdev_fallocate+0x125/0x180
> vfs_fallocate+0x126/0x2e0
> __x64_sys_fallocate+0x37/0x60
> do_syscall_64+0x61/0x80
> ? do_syscall_64+0x6e/0x80
> entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> Regards,
> Alex.
> 

First of all, thank you very much for reporting this bug, and I'm sorry
about your data loss.

The problem is likely caused by memory exhaustion, which in turn can be
triggered by setting a small bio_max_size. I could not reproduce the issue
in my VM environment at first, but I did reproduce it after forcing
bio_max_size to 8KB: with such a small bio_max_size, far too many bio
allocations occur.

So I have prepared a v10 patch to fix this bug. It prevents bio_max_size
from being set to a small value: bio_max_size is now clamped to a minimum
of 1MB, which matches the legacy bio size before "multipage bvec" was
applied.

It would be very helpful if you could test with the v10 patch. :)

Thanks,
Changheun Lee.

