From: Marc MERLIN <marc@merlins.org>
To: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Cc: Josef Bacik <josef@toxicpanda.com>,
"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: Rebuilding 24TB Raid5 array (was btrfs corruption: parent transid verify failed + open_ctree failed)
Date: Tue, 5 Apr 2022 21:09:13 -0700
Message-ID: <20220406040913.GE3307770@merlins.org>
In-Reply-To: <Ykzvoz47Rvknw7aH@hungrycats.org>
On Tue, Apr 05, 2022 at 09:40:51PM -0400, Zygo Blaxell wrote:
> Based on the history, I'd expect the filesystem is missing some number
> of tree nodes, from a few dozen to thousands, depending on how many
> writes were dropped after the 2nd drive failure before it was detected.
> Since the array was also degraded at that time, with 4 drives in raid5,
> there's 3 data drives, and if one of them was offline then we'd have a 2/3
> success rate reading metadata blocks and 1/3 garbage. That's definitely
> in the "we need to write new software to recover from this" territory.
I think your conclusion is correct. I'm very dismayed that the
filesystem didn't go read-only right away.
Hell, the mdadm block device should have gone read-only as soon as it
lost more than one drive.
Why were any writes allowed once more than one drive was missing?
Let's look at this for a second:
Mar 28 02:28:11 gargamel kernel: [1512988.446844] sd 6:1:8:0: Device offlined - not ready after error recovery
Mar 28 02:28:11 gargamel kernel: [1512988.475270] sd 6:1:8:0: rejecting I/O to offline device
Mar 28 02:28:11 gargamel kernel: [1512988.491531] blk_update_request: I/O error, dev sdi, sector 261928312 op 0x0:(READ) flags 0x84700 phys_seg 42 prio class 0
Mar 28 02:28:11 gargamel kernel: [1512988.525073] blk_update_request: I/O error, dev sdi, sector 261928824 op 0x0:(READ) flags 0x80700 phys_seg 5 prio class 0
Mar 28 02:28:12 gargamel kernel: [1512988.579667] blk_update_request: I/O error, dev sdi, sector 261927936 op 0x0:(READ) flags 0x80700 phys_seg 47 prio class 0
(..)
Mar 28 02:28:12 gargamel kernel: [1512988.615910] md: super_written gets error=10
Mar 28 02:28:12 gargamel kernel: [1512988.619241] md/raid:md7: Disk failure on sdi1, disabling device.
Mar 28 02:28:12 gargamel kernel: [1512988.619241] md/raid:md7: Operation continuing on 4 devices.
Mar 28 02:28:21 gargamel kernel: [1512998.170192] usb 2-1.6-port1: disabled by hub (EMI?), re-enabling...
Mar 28 02:28:21 gargamel kernel: [1512998.240404] print_req_error: 134 callbacks suppressed
Mar 28 02:28:21 gargamel kernel: [1512998.240406] blk_update_request: I/O error, dev sdi, sector 11721044992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
Mar 28 02:28:21 gargamel kernel: [1512998.243415] ftdi_sio 2-1.6.1:1.0: device disconnected
Mar 28 02:28:21 gargamel kernel: [1512998.341221] blk_update_request: I/O error, dev sdi, sector 11721044992 op 0x0:(READ) flags 0x0 p
(...)
Mar 28 02:28:22 gargamel kernel: [1512998.716351] blk_update_request: I/O error, dev sdi, sector 2058 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Mar 28 02:28:22 gargamel kernel: [1512998.716362] blk_update_request: I/O error, dev sdi, sector 2059 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Ok, one drive died, but the raid5 continued in degraded mode:
md7 : active raid5 sdi1[5](F) sdo1[7] sdg1[6] sdj1[3] sdh1[1]
23441561600 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
bitmap: 0/44 pages [0KB], 65536KB chunk
Not sure what these were:
Mar 29 00:00:08 gargamel kernel: [1590505.415665] bcache: bch_count_backing_io_errors() md7: Read-ahead I/O failed on backing device, ignore
Mar 29 00:00:09 gargamel kernel: [1590505.866094] bcache: bch_count_backing_io_errors() md7: Read-ahead I/O failed on backing device, ignore
9 hours later, a 2nd drive died just as I was replacing the failed one:
Mar 29 09:30:12 gargamel kernel: [1624709.301830] sd 6:1:5:0: [sdh] tag#523 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=2s
Mar 29 09:30:12 gargamel kernel: [1624709.331812] sd 6:1:5:0: [sdh] tag#523 CDB: Read(16) 88 00 00 00 00 00 00 26 3f d0 00 00 00 18 00 00
Mar 29 09:30:12 gargamel kernel: [1624709.359459] blk_update_request: I/O error, dev sdh, sector 2506704 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
Mar 29 09:30:12 gargamel kernel: [1624709.359465] md/raid:md7: read error not correctable (sector 2504656 on sdh1).
Mar 29 09:30:12 gargamel kernel: [1624709.359471] md/raid:md7: read error not correctable (sector 2504664 on sdh1).
Mar 29 09:30:12 gargamel kernel: [1624709.359472] md/raid:md7: read error not correctable (sector 2504672 on sdh1).
Mar 29 09:30:12 gargamel kernel: [1624709.359486] md/raid:md7: read error not correctable (sector 2504656 on sdh1).
Mar 29 09:30:12 gargamel kernel: [1624709.455886] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.681637] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.695785] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.717624] md/raid:md7: read error not correctable (sector 2504664 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624709.739746] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.757206] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.770348] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.790298] md/raid:md7: read error not correctable (sector 2504672 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624709.812546] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.825856] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.839006] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.866656] md/raid:md7: read error not correctable (sector 24815552 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624709.898796] md/raid:md7: read error not correctable (sector 24815552 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624709.921135] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.934315] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.947500] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624709.988589] md/raid:md7: read error not correctable (sector 1763985936 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624710.036204] md/raid:md7: read error not correctable (sector 1763985936 on sdh1).
Mar 29 09:30:13 gargamel kernel: [1624710.059121] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624710.088858] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624710.102026] md: super_written gets error=10
Mar 29 09:30:13 gargamel kernel: [1624710.158830] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.096055] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:36:37 gargamel kernel: [1625094.122910] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
Mar 29 09:36:37 gargamel kernel: [1625094.153249] md/raid:md7: read error not correctable (sector 6562801616 on sdh1).
Mar 29 09:36:37 gargamel kernel: [1625094.176011] md/raid:md7: read error not correctable (sector 6562801624 on sdh1).
Mar 29 09:36:37 gargamel kernel: [1625094.223351] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.250628] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.263726] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.276989] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.290121] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.303267] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.325084] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.342083] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.355206] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.368394] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.383304] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.396423] md: super_written gets error=10
Mar 29 09:36:37 gargamel kernel: [1625094.409498] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:36:37 gargamel kernel: [1625094.436355] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
Mar 29 09:36:37 gargamel kernel: [1625094.466729] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:36:37 gargamel kernel: [1625094.493600] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 3, flush 0, corrupt 0, gen 0
Mar 29 09:36:37 gargamel kernel: [1625094.523998] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:36:38 gargamel kernel: [1625094.550938] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 4, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.066422] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.093309] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 5, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.124768] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.151651] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 6, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.182803] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.209677] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 7, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.239972] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.266862] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 8, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.297234] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.324094] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 9, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.354422] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.381286] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 10, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.411926] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.438770] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 11, flush 0, corrupt 0, gen 0
Mar 29 09:37:34 gargamel kernel: [1625151.469361] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:34 gargamel kernel: [1625151.496269] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 12, flush 0, corrupt 0, gen 0
Mar 29 09:37:35 gargamel kernel: [1625151.527455] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:35 gargamel kernel: [1625151.554360] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 13, flush 0, corrupt 0, gen 0
Mar 29 09:37:35 gargamel kernel: [1625151.584963] bcache: bch_count_backing_io_errors() md7: IO error on backing device, unrecoverable
Mar 29 09:37:35 gargamel kernel: [1625151.611842] BTRFS error (device dm-17): bdev /dev/mapper/dshelf1a errs: wr 0, rd 14, flush 0, corrupt 0, gen 0
24 seconds went by before btrfs noticed anything, and then it seems to
have continued without going read-only for another full minute before
things stopped.
Why would the raid5 not go read-only immediately after 2 drives were lost?
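For the record, here's roughly what I'd have wanted to happen automatically:
force the array read-only the moment it can no longer tolerate another
failure. This is just a sketch (the device name is from my setup, and it
needs root, so it only prints the command when the array isn't actually
present):

```shell
# Sketch: force an md array read-only once it's lost its redundancy.
# Device name is hypothetical for anyone else; requires root.
DEV=/dev/md7
MSG="would run: mdadm --readonly $DEV"
if [ -b "$DEV" ] && [ "$(id -u)" = "0" ]; then
    # mdadm --readonly flips the array to read-only; the sysfs
    # equivalent is: echo readonly > /sys/block/md7/md/array_state
    mdadm --readonly "$DEV"
else
    echo "$MSG"
fi
```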
> Normally, I'd expect that once we dig through a few layers of simple
> dropped write blocks, we'll start hitting metadata pages with bad csums
> and trashed contents, since the parity blocks will be garbage in the raid5
> stripes where the writes were lost. One important data point against
> this theory is that we have not seen a csum failure yet, so maybe this
> is a different (possibly better) scenario. Possibly some of the lost
> writes on the raid5 are still stored in the bcache, so there's few or no
> garbage blocks (though reading the array through the cache might evict
> the last copy of usable data and make some damage permanent--you might
> want to make a backup copy of the cache device).
Interesting, thanks.
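For anyone following along, backing up the cache device first would look
roughly like this (sketch only; the device and backup paths are made up):

```shell
# Sketch: image the bcache cache device before any recovery reads, so
# cached copies of lost writes can't be evicted irretrievably.
# CACHE_DEV and BACKUP are hypothetical names; prints the command
# instead of running it when the device is absent.
CACHE_DEV=/dev/sdq1
BACKUP=/backup/bcache-cache.img
MSG="would run: dd if=$CACHE_DEV of=$BACKUP"
if [ -b "$CACHE_DEV" ]; then
    # conv=sync,noerror keeps going past unreadable sectors
    dd if="$CACHE_DEV" of="$BACKUP" bs=1M conv=sync,noerror
else
    echo "$MSG"
fi
```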
> Backup roots only work if writes are dropped only in the most recent
> transaction, maybe two, because only these trees are guaranteed to be
> intact on disk. After that, previously occupied pages are fair game
> for new write allocations, and old metadata will be lost. Unlike other
> filesystems, btrfs never writes metadata in the same place twice, so when
> a write is dropped, there isn't an old copy of the data still available at
> the location of the dropped write--that location contains some completely
> unrelated piece of the metadata tree whose current version now lives
> at some other location. Later tree updates will overwrite old copies
> of the updated page, destroying the data in the affected page forever.
> Essentially there will be a set of metadata pages where you have two
> versions of different ages, and another set of metadata pages where you
> have zero versions, and (hopefully) most of the other pages are intact.
I see. It's definitely a lot more complex, and much more likely to break,
when some amount of recent writes gets lost or corrupted.
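In case it helps others reading the archive, the backup roots can be
inspected and tried non-destructively before anything else (sketch; the
device name is from my setup, the mountpoint is made up, and as explained
above this only helps if writes were dropped within the last transaction
or two):

```shell
# Sketch: dump the superblock's backup roots, then try mounting with
# them read-only. Device name and mountpoint are hypothetical; prints
# the command instead when the device is absent.
DEV=/dev/mapper/dshelf1a
MSG="would run: mount -o ro,usebackuproot $DEV /mnt/btrfs"
if [ -b "$DEV" ]; then
    # -f dumps the full superblock, including the backup_roots array
    btrfs inspect-internal dump-super -f "$DEV" | grep -A5 backup_roots
    mount -o ro,usebackuproot "$DEV" /mnt/btrfs
else
    echo "$MSG"
fi
```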
> If we have a superblock, the chunk tree, and a subvol tree, we can
> drop all the other trees and rebuild them (bonus points if the csum
> tree survived, then we can verify all the data was recovered correctly;
> otherwise, we can read all the files and make a new csum tree but it won't
> detect any data corruption that might have happened in degraded mode).
> This is roughly what 'btrfs check --init-extent-tree' does (though due
> to implementation details it has a few extra dependencies that might
> get in the way) and you can find subvol roots with btrfs-find-root.
Got it, thanks.
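So, if I understand it, the recovery path you describe would look roughly
like this (sketch; device name from my setup, BYTENR is a placeholder for
whatever btrfs-find-root reports, and --init-extent-tree is destructive so
it would only ever run against a copy):

```shell
# Sketch: find candidate tree roots, scrape files out read-only first,
# then rebuild the extent tree. Prints the command instead of running
# it when the device is absent.
DEV=/dev/mapper/dshelf1a
BYTENR=0   # placeholder: replace with a bytenr reported by btrfs-find-root
MSG="would run: btrfs check --repair --init-extent-tree $DEV"
if [ -b "$DEV" ]; then
    btrfs-find-root "$DEV"                        # candidate root bytenrs
    btrfs restore -t "$BYTENR" "$DEV" /mnt/out    # read-only file scrape
    btrfs check --repair --init-extent-tree "$DEV"
else
    echo "$MSG"
fi
```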
> If we don't have any intact subvol trees (or the subvol trees we really
> want aren't intact), then we can't recover this way. Instead we'd have
> to scrape the disk looking for metadata leaf nodes, and try to reinsert
> those into a new tree structure. The trick here is that we'll have the
> duplicated and inconsistent nodes and we won't have some nodes at all,
> and we'll have to make sense of those (or pass it to the existing btrfs
> check and hope it can cope with them). I'm guessing that a simplified
> version of this is what Josef is building at this point, or will be
> building soon if we aren't extremely lucky and find an intact subvol tree.
> After building an intact subvol tree (even with a few garbage items in it
> as long as check can handle them) we can go back to the --init-extent-tree
> step and rebuild the rest of the filesystem.
I see. In that case, I'm still happy to help improve the tools, but if
I'm looking at some amount of non-trivial loss/corruption, at some point
I'll go back to backups, since they'll be more intact than this now
damaged filesystem.
Thanks for the detailed answers.
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Home page: http://marc.merlins.org/ | PGP 7F55D5F27AAF9D08