From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: Marc MERLIN <marc@merlins.org>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: 5.6 pretty massive unexplained btrfs corruption: parent transid verify failed + open_ctree failed
Date: Tue, 7 Jul 2020 23:44:07 -0400 [thread overview]
Message-ID: <20200708034407.GE10769@hungrycats.org> (raw)
In-Reply-To: <20200707035530.GP30660@merlins.org>
On Mon, Jul 06, 2020 at 08:55:30PM -0700, Marc MERLIN wrote:
> I'd love to know what went wrong so that it doesn't happen again, but let me know if you'd like data off this
> before I wipe it (which I assume is the only way out at this point)
> myth:~# btrfs check --mode=lowmem /dev/mapper/crypt_bcache0
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> Ignoring transid failure
> leaf parent key incorrect 7325633544192
> ERROR: failed to read block groups: Operation not permitted
> ERROR: cannot open file system
>
>
> I did run bees on that filesystem, but I also just did a full btrfs check on it, and it came back clean:
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/crypt_bcache4
> UUID: 36f5079e-ca6c-4855-8639-ccb82695c18d
> [1/7] checking root items
> Fixed 0 roots.
> [2/7] checking extents
> No device size related problem found
> [3/7] checking free space cache
> cache and super generation don't match, space cache will be invalidated
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 18089211043840 bytes used, no error found
> total csum bytes: 17580412652
> total tree bytes: 82326192128
> total fs tree bytes: 56795086848
> total extent tree bytes: 5154258944
> btree space waste bytes: 13682108904
> file data blocks allocated: 24050542804992
>
>
> I then moved it to the target machine, started a btrfs send to it, and it failed quickly (due to a mistake
> I had an old btrfs binary on that server, but I'm hoping most of the work is done in kernel space and that the user space
> btrfs should not corrupt the disk if it's a bit old)
btrfs send has historically had bugs but not filesystem-damaging
ones (just relatively harmless kernel crashes and send failures).
btrfs receive is almost entirely userspace--it can't corrupt anything
that can't be corrupted by normal filesystem operations.
> myth:/mnt# uname -r
> 5.6.5-amd64-preempt-sysrq-20190817
>
> Soon after, the copy failed:
> [ 2575.931316] BTRFS info (device dm-0): use zlib compression, level 3
> [ 2575.931329] BTRFS info (device dm-0): disk space caching is enabled
> [ 2575.931343] BTRFS info (device dm-0): has skinny extents
> [ 2577.286749] BTRFS info (device dm-0): bdev /dev/mapper/crypt_bcache0 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
This line does indicate an older problem with the filesystem. It doesn't
tell us whether the corruption happened yesterday or a year ago.
You will need to look at your older kernel logs for that.
> [ 2607.943516] BTRFS info (device dm-0): enabling ssd optimizations
> [ 2708.835200] BTRFS warning (device dm-0): block group 13002170433536 has wrong amount of free space
> [ 2708.835209] BTRFS warning (device dm-0): failed to load free space cache for block group 13002170433536, rebuilding it now
> [ 2740.589580] BTRFS warning (device dm-0): block group 17151175950336 has wrong amount of free space
> [ 2740.589593] BTRFS warning (device dm-0): failed to load free space cache for block group 17151175950336, rebuilding it now
> [ 2797.204169] perf: interrupt took too long (3146 > 3138), lowering kernel.perf_event_max_sample_rate to 63500
> [ 2882.545242] BTRFS info (device dm-0): the free space cache file (26234763345920) is invalid, skip it
Use space_cache=v2, especially on a big filesystem, because space_cache=v1
slows down linearly with filesystem size. With v2 there is no need for
these warnings. They're probably a symptom here rather than a cause.
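For reference, one possible migration path (a sketch only — the device
path is taken from your logs, and the exact progs/kernel support for
these steps should be checked against your versions before running):

```shell
# Clear the stale v1 free space cache while the filesystem is unmounted
# (supported by recent btrfs-progs; older versions can only clear it
# via the clear_cache mount option).
btrfs check --clear-space-cache v1 /dev/mapper/crypt_bcache0

# Mount once with space_cache=v2 to build the free space tree; after
# that the v2 cache is persistent and the option is no longer required.
mount -o space_cache=v2 /dev/mapper/crypt_bcache0 /mnt/btrfs_bigbackup
```

These are one-time administrative commands against a specific device,
so treat them as a template rather than something to paste verbatim.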
> [ 3071.631905] BTRFS error (device dm-0): parent transid verify failed on 6353897537536 wanted 359658 found 359661
> [ 3071.643430] BTRFS error (device dm-0): parent transid verify failed on 6353897537536 wanted 359658 found 359661
> [ 3071.661985] BTRFS error (device dm-0): parent transid verify failed on 6353897537536 wanted 359658 found 359661
> [ 3071.661995] BTRFS: error (device dm-0) in btrfs_run_delayed_refs:2210: errno=-5 IO failure
> [ 3071.661999] BTRFS info (device dm-0): forced readonly
> [ 3071.662567] BTRFS error (device dm-0): parent transid verify failed on 6353897537536 wanted 359658 found 359661
> [ 3071.663076] BTRFS error (device dm-0): parent transid verify failed on 6353897537536 wanted 359658 found 359661
> [ 3071.663083] BTRFS: error (device dm-0) in btrfs_run_delayed_refs:2210: errno=-5 IO failure
>
> Ok, maybe there was an IO failure, although none was shown by the kernel:
The "IO failure" mentioned here is the earlier parent transid verify
failure. When the verification fails, the caller gets -EIO.
> however, now the FS is mostly dead:
> [ 3649.106084] BTRFS info (device dm-0): use zlib compression, level 3
> [ 3649.106095] BTRFS info (device dm-0): disk space caching is enabled
> [ 3649.106100] BTRFS info (device dm-0): has skinny extents
> [ 3650.445828] BTRFS info (device dm-0): bdev /dev/mapper/crypt_bcache0 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
> [ 3652.952110] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 3652.959199] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 3652.959208] BTRFS error (device dm-0): failed to read block groups: -5
> [ 3653.002227] BTRFS error (device dm-0): open_ctree failed
> [ 3876.808183] BTRFS info (device dm-0): use zlib compression, level 3
> [ 3876.808192] BTRFS info (device dm-0): disk space caching is enabled
> [ 3876.808195] BTRFS info (device dm-0): has skinny extents
> [ 3878.140763] BTRFS info (device dm-0): bdev /dev/mapper/crypt_bcache0 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
> [ 3880.623113] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 3880.633290] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 3880.633298] BTRFS error (device dm-0): failed to read block groups: -5
> [ 3880.669435] BTRFS error (device dm-0): open_ctree failed
> [ 4057.606879] BTRFS info (device dm-0): use zlib compression, level 3
> [ 4057.606890] BTRFS info (device dm-0): disk space caching is enabled
> [ 4057.606894] BTRFS info (device dm-0): has skinny extents
> [ 4058.886212] BTRFS info (device dm-0): bdev /dev/mapper/crypt_bcache0 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
> [ 4061.501589] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 4061.503790] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
The amount of damage here is small, but it looks like you lost a
superblock update or two (btrfs expected an old page and found a new one)
so the root of the filesystem now points at an old tree that has since
been partially overwritten.
Any part of the filesystem on the other side of the missing metadata pages
is no longer accessible without a brute force search of the metadata.
This might take longer than mkfs+restore with the current btrfs check
--repair, especially if you have to run chunk-recover as well.
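If you do want to attempt the brute-force route before wiping, the
usual sketch looks something like this (read-only against the device;
the bytenr is a placeholder you'd fill in from the first command's
output):

```shell
# Scan the metadata for older tree root candidates and their
# generations (slow on a filesystem this size).
btrfs-find-root /dev/mapper/crypt_bcache0

# Try restoring files using a candidate tree root bytenr; restore
# only reads the damaged device and writes recovered files to the
# destination directory.
btrfs restore -t <bytenr> /dev/mapper/crypt_bcache0 /mnt/recovery
```

Expect partial results at best: anything behind the overwritten
metadata pages stays unreachable no matter which root you pick.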
> myth:/mnt# btrfs-zero-log /dev/mapper/crypt_bcache0
> WARNING: this utility is deprecated, please use 'btrfs rescue zero-log'
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> parent transid verify failed on 7325633544192 wanted 359658 found 359661
> Ignoring transid failure
> leaf parent key incorrect 7325633544192
> Clearing log on /dev/mapper/crypt_bcache0, previous log_root 0, level 0
Clearing the log tree will have no effect on a parent transid verify
failure. It only helps work around bugs that occur during log tree
replay, which happens at a later stage of mounting the filesystem.
> myth:/mnt# mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/mapper/crypt_bcache0 /mnt/btrfs_bigbackup/
> mount: wrong fs type, bad option, bad superblock on /dev/mapper/crypt_bcache0,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
>
> myth:/mnt# dmtail
> [ 6665.975324] BTRFS info (device dm-0): unrecognized mount option 'rootflags=recovery'
> [ 6665.975357] BTRFS error (device dm-0): open_ctree failed
> [ 6686.664202] BTRFS warning (device dm-0): 'recovery' is deprecated, use 'usebackuproot' instead
> [ 6686.664213] BTRFS info (device dm-0): trying to use backup root at mount time
> [ 6686.664219] BTRFS info (device dm-0): disabling disk space caching
> [ 6686.664224] BTRFS info (device dm-0): force clearing of disk cache
> [ 6686.664232] BTRFS info (device dm-0): has skinny extents
> [ 6687.911926] BTRFS info (device dm-0): bdev /dev/mapper/crypt_bcache0 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
> [ 6690.522785] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 6690.529495] BTRFS error (device dm-0): parent transid verify failed on 7325633544192 wanted 359658 found 359661
> [ 6690.529504] BTRFS error (device dm-0): failed to read block groups: -5
> [ 6690.556227] BTRFS error (device dm-0): open_ctree failed
>
>
> myth:/mnt# btrfs restore /dev/mapper/crypt_bcache0 /mnt/btrfs_bigbackup/
> Skipping snapshot win_ro.20200615_02:49:02
> Skipping snapshot 0Notmachines_ro.20200626_16:09:30
> Skipping snapshot 1Appliances_ro.20200626_17:08:56
> Skipping snapshot debian32_ro.20200626_17:11:57
> Skipping snapshot debian64_ro.20200626_17:18:15
> Skipping snapshot ubuntu_ro.20200626_17:18:44
> Skipping snapshot win_ro.20200626_18:39:13
> Skipping snapshot 0Notmachines_ro.20200629_00:48:39
> Skipping snapshot 1Appliances_ro.20200629_01:11:44
> Skipping snapshot debian32_ro.20200629_01:11:50
> Skipping snapshot debian64_ro.20200629_01:19:52
> Skipping snapshot ubuntu_ro.20200629_01:20:01
> Error copying data for /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> Error searching /mnt/btrfs_bigbackup/DS2/backup-btrfssend/gargamel/root_last_ro/lib/modules/5.1.19-amd64-preempt-sysrq-20180818/kernel/drivers/video/fbdev/arkfb.ko
> myth:/mnt# l /mnt/btrfs_bigbackup/
> is missing a lot of stuff
>
> It's a backup server, I can recreate the data, but it will probably take
> 2 weeks to copy everything again, and I'd like to know what on earth
> happened so that I can avoid having this again.
"parent transid verify failed" is btrfs's gentle way of saying "your
devices have busted write ordering, reconfigure or replace them and
try again."
In a later post in this thread, you posted a log showing your drive model
numbers and firmware revisions. White Label repackages drives made
by other companies(*) under their own part number (singular, they all
use the same part number), but they leave the firmware revision intact,
so we can look up the firmware revision and see that you have two
different WD models in your md6 array from families with known broken
firmware write caching.
sdd is running firmware 0957, also found in circa-2014 WD Green.
The others are running 01.01RA2 firmware that appears in a model family
that includes some broken WD Green and Red models from a few years back
(including the venerable datavore 80.00A80). I have a few of the WD
branded versions of these drives. They are unusable with write cache
enabled: roughly 1 in 10 unclean shutdowns leads to filesystem
corruption on btrfs, and to git and postgresql database corruption on
ext4. After disabling write cache, I've used them for years with no
problems.
Hopefully your bcache drive is OK; you didn't post any details on that.
bcache on a drive with buggy firmware write caching fails *spectacularly*.
You can work around buggy write cache firmware with a udev rule like
this to disable write cache on all the drives:
ACTION=="add|change", SUBSYSTEM=="block", DRIVERS=="sd", KERNEL=="sd*[!0-9]", RUN+="/sbin/hdparm -W 0 $devnode"
Note that in your logs, the kernel reports that 'sdd' has write cache
disabled already, maybe due to lack of firmware support or a conservative
default setting. That makes it probably the only drive in that array
that is working properly.
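To see what each drive is doing right now, you can query the cache
state directly (hdparm -W with no argument queries instead of sets;
/dev/sdX below is a placeholder):

```shell
# Query the volatile write cache state on every whole-disk device.
for d in /dev/sd[a-z]; do
    echo "== $d =="
    hdparm -W "$d"    # "write-caching = 1 (on)" means the cache is enabled
done

# Disable write cache on one drive until the next power cycle; the
# udev rule above is what makes the setting persistent across
# reboots and hotplug events.
hdparm -W 0 /dev/sdX
```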
bcache could be losing its mind too, but although I've heard a lot
of rumors of bcache bugs, I've yet to catch it having a problem that
wasn't directly caused by bad SSD firmware or host configuration.
If the bcache was configured in writeback mode and it was separated from
the backing device for a while then there could be a consistency issue
that would result in something like this, but bcache is pretty good at
preventing that.
In theory, space_cache=v1 might have consistency issues that lead to some
or all of the symptoms above; however, a) btrfs has checks at multiple
points to detect or prevent that, and b) the inconsistency would have
to be caused by a firmware write cache bug or bcache bug anyway.
There are some other questionable things in your setup: you have a
mdadm-raid5 with no journal device, so if PPL is also not enabled,
and you are running btrfs on top, then this filesystem is vulnerable
to instant destruction by the mdadm-raid5 write hole after a disk fails.
bcache in writeback mode with a single cache pool using multiple physical
backing store devices is vulnerable to extra data corruption failures
in the event that the SSD(s) go bad. I'm guessing that none of the
backing store drives have scterc support, which will complicate error
recovery with SATA bus timeouts and resets as disks fail (though most
of the problems with that are conveniently also prevented by disabling
write cache). None of these issues caused problems today, though,
and won't cause a problem until disks start to fail.
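If you want to close those gaps later, the mitigations look roughly
like this (the md device name comes from your logs; the timeout
values are common choices, not anything you posted, and firmware
support for scterc varies by drive):

```shell
# Enable the partial parity log on the array to close the raid5
# write hole without a dedicated journal device (needs v1.x metadata
# and a reasonably recent mdadm/kernel).
mdadm --grow --consistency-policy=ppl /dev/md6

# Query, then set, SCT error recovery control to 7 seconds so a
# failing disk returns an error before the SATA bus times out and
# resets (values are in tenths of a second).
smartctl -l scterc /dev/sdX
smartctl -l scterc,70,70 /dev/sdX

# On disks without scterc support, raise the kernel-side SCSI command
# timeout instead so the drive's long internal retries don't trigger
# a bus reset.
echo 180 > /sys/block/sdX/device/timeout
```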
(*) their product description text says "other companies", but maybe
White Label is just a part of WD, hiding their shame as they dispose of
unsalable inventory in an unsuspecting market. Don't know, don't care
enough to find out.
>
> Thanks,
> Marc
> --
> "A mouse is a device used to point at the xterm you want to type in" - A.S.R.
>
> Home page: http://marc.merlins.org/ | PGP 7F55D5F27AAF9D08