* Help recovering filesystem (if possible)
@ 2021-11-15  1:52 Matthew Dawson
  2021-11-15 10:46 ` Kai Krakow
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Dawson @ 2021-11-15  1:52 UTC (permalink / raw)
  To: linux-btrfs

Hi all,

I recently upgraded one of my machines to the 5.15.2 kernel.  On the first 
reboot, I had a kernel fault during the initialization (I didn't get to 
capture the printed stack trace, but I'm 99% sure it did not have BTRFS 
related calls).  I then rebooted the machine back to a 5.14 kernel, but the 
BCache (writeback) cache was corrupted.  I then force started the underlying 
disks, but now my BTRFS filesystem will no longer mount.  I realize there may 
be missing/corrupted data, but I would like to ideally get any data I can off 
the disks.

This system involves ten 8TB disks; some are doing BCache -> LUKS -> BTRFS, some 
are doing LUKS -> BTRFS.

When I try to mount the filesystem, I get the following in dmesg:
[117632.798339] BTRFS info (device dm-0): flagging fs with big metadata feature
[117632.798344] BTRFS info (device dm-0): disk space caching is enabled
[117632.798346] BTRFS info (device dm-0): has skinny extents
[117632.873186] BTRFS error (device dm-0): parent transid verify failed on 
132806584614912 wanted 3240123 found 3240119
[117632.873542] BTRFS error (device dm-0): parent transid verify failed on 
132806584614912 wanted 3240123 found 3240119
[117632.873592] BTRFS warning (device dm-0): couldn't read tree root
[117632.883662] BTRFS error (device dm-0): open_ctree failed

I then tried using rescue=all,ro to mount the filesystem, but got:
[117658.264048] BTRFS info (device dm-0): flagging fs with big metadata feature
[117658.264056] BTRFS info (device dm-0): enabling all of the rescue options
[117658.264057] BTRFS info (device dm-0): ignoring data csums
[117658.264059] BTRFS info (device dm-0): ignoring bad roots
[117658.264060] BTRFS info (device dm-0): disabling log replay at mount time
[117658.264061] BTRFS info (device dm-0): disk space caching is enabled
[117658.264062] BTRFS info (device dm-0): has skinny extents
[117658.286252] BTRFS error (device dm-0): parent transid verify failed on 
132806584614912 wanted 3240123 found 3240119
[117658.286573] BTRFS error (device dm-0): parent transid verify failed on 
132806584614912 wanted 3240123 found 3240119
[117658.286614] BTRFS warning (device dm-0): couldn't read tree root
[117658.294632] BTRFS error (device dm-0): open_ctree failed
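
For reference, the rescue mount attempt was roughly the following (the device
node and mount point here are placeholders rather than my exact paths):

mount -o ro,rescue=all /dev/dm-0 /mnt/recovery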

Running btrfs check (not repair) to see if it printed anything else:
parent transid verify failed on 132806546751488 wanted 3240122 found 3239869
parent transid verify failed on 132806546751488 wanted 3240122 found 3239869
parent transid verify failed on 132806571458560 wanted 3240122 found 3239869
parent transid verify failed on 132806571458560 wanted 3240122 found 3239869
parent transid verify failed on 132806571458560 wanted 3240122 found 3239869
parent transid verify failed on 132806571458560 wanted 3240122 found 3239869
Ignoring transid failure
leaf parent key incorrect 132806571458560
Couldn't setup extent tree
ERROR: cannot open file system

Running btrfs restore to see if data could be recovered prints:
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
Ignoring transid failure
Couldn't setup extent tree
Couldn't setup device tree
Could not open root, trying backup super
warning, device 6 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 7 is missing
warning, device 9 is missing
warning, device 14 is missing
bytenr mismatch, want=136920576753664, have=0
ERROR: cannot read chunk root
Could not open root, trying backup super
warning, device 6 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 7 is missing
warning, device 9 is missing
warning, device 14 is missing
bytenr mismatch, want=136920576753664, have=0
ERROR: cannot read chunk root
Could not open root, trying backup super

(All disks are present in the system)

Is there any hope of recovering this data?  Or should I give up on it at this 
point and reformat?  Most of the data is backed up (or is itself a backup), 
but I'd like to get what I can.

Thanks,
-- 
Matthew



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-15  1:52 Help recovering filesystem (if possible) Matthew Dawson
@ 2021-11-15 10:46 ` Kai Krakow
  2021-11-18  2:57   ` Matthew Dawson
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Krakow @ 2021-11-15 10:46 UTC (permalink / raw)
  To: Matthew Dawson; +Cc: linux-btrfs

On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
<matthew@mjdsystems.ca> wrote:
> I recently upgraded one of my machines to the 5.15.2 kernel.  On the first
> reboot, I had a kernel fault during the initialization (I didn't get to
> capture the printed stack trace, but I'm 99% sure it did not have BTRFS
> related calls).  I then rebooted the machine back to a 5.14 kernel, but the
> BCache (writeback) cache was corrupted.  I then force started the underlying
> disks, but now my BTRFS filesystem will no longer mount.  I realize there may
> be missing/corrupted data, but I would like to ideally get any data I can off
> the disks.

I had a similar issue lately where the system didn't reboot cleanly
(there's some issue in the BIOS or with the SSD firmware where it
would disconnect the SSD from SATA a few seconds after boot, forcing
bcache into detaching dirty caches).

Since you are seeing transaction IDs lagging behind expectations, I
think you've lost dirty writeback data from bcache. To avoid this in the
future, you should use bcache only in writearound or writethrough
mode.

> This system involves ten 8TB disks; some are doing BCache -> LUKS -> BTRFS, some
> are doing LUKS -> BTRFS.

Not LUKS here, and all my btrfs pool members are attached to a single
SSD as caching frontend.

> When I try to mount the filesystem, I get the following in dmesg:
> [117632.798339] BTRFS info (device dm-0): flagging fs with big metadata feature
> [117632.798344] BTRFS info (device dm-0): disk space caching is enabled
> [117632.798346] BTRFS info (device dm-0): has skinny extents
> [117632.873186] BTRFS error (device dm-0): parent transid verify failed on
> 132806584614912 wanted 3240123 found 3240119

I had luck with the following steps:

* ensure that all members are attached to bcache as they should
* ensure bcache is running in writearound mode for each member
* ensure that btrfs did scan for all members

Next, I ran `btrfs check` against each member disk; eventually one
contained the needed disk structures and showed only a few errors.

I was then able to mount btrfs through that device node; open_ctree
didn't fail this time. I don't remember if I used "usebackuproot" for
mount or a similar switch for "btrfs check".

I then ran `btrfs scrub` which fixed the broken metadata. Luckily, I
had only metadata corruption on the disks which had dirty writeback
cleared, and metadata runs in RAID-1 mode for me.
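
Roughly, the commands looked like this (the device names are examples from my
setup, and I may be misremembering the exact switches):

# confirm each backing device still has its cache attached
cat /sys/block/bcache0/bcache/state
# make sure btrfs has seen every member device
btrfs device scan
# read-only check of each member until one opens with only a few errors
btrfs check --readonly /dev/bcache0
# mount through that device node, possibly with the backup root
mount -o ro,usebackuproot /dev/bcache0 /mnt
# after remounting read-write, scrub repairs metadata from the RAID-1 copies
btrfs scrub start -Bd /mnt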

"btrfs check" then didn't find any errors. Reboot worked fine.

[...]
> Is there any hope in recovering this data?  Or should I give up on it at this
> point and reformat?  Most of the data is backed up (or are backups
> themselves), but I'd like to get what I can.

Well, I'm doing daily backups with borg - to a different technology
(no btrfs, no bcache, different system). I don't think backing up
btrfs to btrfs is a brilliant idea, especially not when both are
mounted to the same system.

You may try my steps above. If you've found a member device which
shows fewer errors, you COULD try to repair it if mount still fails
(or try one of the recovery mount options). But you may want to ask
the experts again here.

Depending on how much dirty writeback you've lost in bcache, chances
may be good that one of the members has enough metadata to
successfully mount or repair the filesystem. Or at least, it's a good
start for "btrfs restore" then.

What do we learn from this?

* probably do not use bcache in writeback mode if you can avoid it
* switch bcache to writearound mode before kernel upgrades and wait for
writeback to finish (example commands below)
* success mounting btrfs may depend a lot on which member device you
actually mount
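
For the mode switch, something along these lines should work (bcache0 is a
placeholder for each backing device's bcache node):

echo writearound > /sys/block/bcache0/bcache/cache_mode
# wait until the cache holds no more dirty data for this device
watch cat /sys/block/bcache0/bcache/dirty_data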


HTH
Kai

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-15 10:46 ` Kai Krakow
@ 2021-11-18  2:57   ` Matthew Dawson
  2021-11-18 21:09     ` Zygo Blaxell
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Dawson @ 2021-11-18  2:57 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

On Monday, November 15, 2021 5:46:43 A.M. EST Kai Krakow wrote:
> On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
> 
> <matthew@mjdsystems.ca> wrote:
> > I recently upgraded one of my machines to the 5.15.2 kernel.  On the first
> > reboot, I had a kernel fault during the initialization (I didn't get to
> > capture the printed stack trace, but I'm 99% sure it did not have BTRFS
> > related calls).  I then rebooted the machine back to a 5.14 kernel, but
> > the
> > BCache (writeback) cache was corrupted.  I then force started the
> > underlying disks, but now my BTRFS filesystem will no longer mount.  I
> > realize there may be missing/corrupted data, but I would like to ideally
> > get any data I can off the disks.
> 
> I had a similar issue lately where the system didn't reboot cleanly
> (there's some issue in the BIOS or with the SSD firmware where it
> would disconnect the SSD from SATA a few seconds after boot, forcing
> bcache into detaching dirty caches).
> 
> Since you are seeing transaction IDs lagging behind expectations, I
> think you've lost dirty writeback data from bcache. To avoid this in the
> future, you should use bcache only in writearound or writethrough
> mode.
Considering I started the bcache devices without the cache, I don't doubt I've 
lost writeback data and I have no doubts there will be issues.  At this point 
I'm just in data recovery, trying to get what I can.

> 
> > This system involves ten 8TB disks; some are doing BCache -> LUKS -> BTRFS,
> > some are doing LUKS -> BTRFS.
> 
> Not LUKS here, and all my btrfs pool members are attached to a single
> SSD as caching frontend.
> 
> > When I try to mount the filesystem, I get the following in dmesg:
> > [117632.798339] BTRFS info (device dm-0): flagging fs with big metadata feature
> > [117632.798344] BTRFS info (device dm-0): disk space caching is enabled
> > [117632.798346] BTRFS info (device dm-0): has skinny extents
> > [117632.873186] BTRFS error (device dm-0): parent transid verify failed on
> > 132806584614912 wanted 3240123 found 3240119
> 
> I had luck with the following steps:
> 
> * ensure that all members are attached to bcache as they should
> * ensure bcache is running in writearound mode for each member
> * ensure that btrfs did scan for all members
> 
> Next, I started `btrfs check` for each member disk, eventually one
> would contain the needed disk structures and only showed a few errors.
> 
> I was then able to mount btrfs through that device node, open ctree
> didn't fail this time. I don't remember if I used "usebackuproot" for
> mount or a similar switch for "btrfs check".
> 
> I then ran `btrfs scrub` which fixed the broken metadata. Luckily, I
> had only metadata corruption on the disks which had dirty writeback
> cleared, and metadata runs in RAID-1 mode for me.
> 
> "btrfs check" then didn't find any errors. Reboot worked fine.
Thanks for the suggestion.  Unfortunately, all my disks report basically the 
same errors, so I wasn't able to recover my system this way.

> 
> [...]
> 
> > Is there any hope in recovering this data?  Or should I give up on it at
> > this point and reformat?  Most of the data is backed up (or are backups
> > themselves), but I'd like to get what I can.
> 
> Well, I'm doing daily backups with borg - to a different technology
> (no btrfs, no bcache, different system). I don't think backing up
> btrfs to btrfs is a brilliant idea, especially not when both are
> mounted to the same system.
I'm not quite that redundant, but the backups of the things I really care 
about actually go to an off-site system.  Accessing data through a backup can 
be painful, though, compared to just pulling it off the disks.  Also, the local 
backups on the system would be nice to have for historical purposes.

> 
> You may try my steps above. If you've found a member device which
> shows fewer errors, you COULD try to repair it if mount still fails
> (or try one of the recovery mount options). But you may want to ask
> the experts again here.
I did try, thanks.  Unfortunately, as noted above, it wasn't helpful.

Hopefully someone has a different idea?  I am posting here because I feel any 
further progress is going to require more dangerous options, and those usually 
say to ask the mailing list first.

> 
> Depending on how much dirty writeback you've lost in bcache, chances
> may be good that one of the members has enough metadata to
> successfully mount or repair the filesystem. Or at least, it's a good
> start for "btrfs restore" then.
> 
> What do we learn from this?
> 
> * probably do not use bcache in writeback mode if you can avoid it
> * switch bcache to writearound mode before kernel upgrades, wait for
> writeback to finish
> * success mounting btrfs may depend a lot on which member device you
> actually mount

Thanks,
-- 
Matthew



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-18  2:57   ` Matthew Dawson
@ 2021-11-18 21:09     ` Zygo Blaxell
  2021-11-19  4:42       ` Matthew Dawson
  0 siblings, 1 reply; 8+ messages in thread
From: Zygo Blaxell @ 2021-11-18 21:09 UTC (permalink / raw)
  To: Matthew Dawson; +Cc: Kai Krakow, linux-btrfs

On Wed, Nov 17, 2021 at 09:57:40PM -0500, Matthew Dawson wrote:
> On Monday, November 15, 2021 5:46:43 A.M. EST Kai Krakow wrote:
> > On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
> > 
> > <matthew@mjdsystems.ca> wrote:
> > > I recently upgraded one of my machines to the 5.15.2 kernel.  On the first
> > > reboot, I had a kernel fault during the initialization (I didn't get to
> > > capture the printed stack trace, but I'm 99% sure it did not have BTRFS
> > > related calls).  I then rebooted the machine back to a 5.14 kernel, but
> > > the
> > > BCache (writeback) cache was corrupted.  I then force started the
> > > underlying disks, but now my BTRFS filesystem will no longer mount.  I
> > > realize there may be missing/corrupted data, but I would like to ideally
> > > get any data I can off the disks.
> > 
> > I had a similar issue lately where the system didn't reboot cleanly
> > (there's some issue in the BIOS or with the SSD firmware where it
> > would disconnect the SSD from SATA a few seconds after boot, forcing
> > bcache into detaching dirty caches).
> > 
> > Since you are seeing transaction IDs lagging behind expectations, I
> > think you've lost dirty writeback data from bcache. To avoid this in the
> > future, you should use bcache only in writearound or writethrough
> > mode.
> Considering I started the bcache devices without the cache, I don't doubt I've 
> lost writeback data and I have no doubts there will be issues.  At this point 
> I'm just in data recovery, trying to get what I can.

The word "issues" is not adequate to describe the catastrophic damage
to metadata that occurs if the contents of a writeback cache are lost.

If writeback failure happens to only one btrfs device's cache, you
can recover with btrfs raid1 self-healing using intact copies stored
on working devices.  If it happens on multiple btrfs devices at once
(e.g. due to misconfiguration of bcache with more than one btrfs device
per pool or more than one bcache pool per SSD, or due to a kernel bug
that affects all bcache instances at once, or a firmware bug that affects
each SSD device the same way during a crash) then recovery isn't possible.

Writeback cache failures are _bad_, falling between "many thousands of
bad sectors" and "total disk failure" in terms of difficulty of recovery.

> Hopefully someone has a different idea?  I am posting here because I feel any 
> luck is going to start using more dangerous options and those usually say to 
> ask the mailing list first.

Your best option would be to get the caches running again, at least in
read-only mode.  It's not a good option, but all your other options depend
on having access to as many cached dirty pages as possible.  If all you
have is the backing devices, then now is the time to scrape what you
can from the drives with 'btrfs restore' then make use of your backups.

This is what you're up against:

btrfs writes metadata pages in a specific order to keep one complete
metadata tree on disk intact at all times.  This means that a specific
item of metadata (e.g. a directory or inode) is stored in different disk
blocks at different times.  Old physical disk blocks are frequently
recycled to store different data--not merely newer versions of the
same items, but completely unrelated items from different areas of
the filesystem.

Writeback caches write to backing devices in mostly sequential
LBA order for performance.  This is a defining characteristic of a
writeback cache--if the cache maintained the btrfs write order on the
backing device then we'd call it a "writethrough" or "writebehind"
cache instead.  Writeback caches don't need to respect write order for
individual blocks on the backing device as long as they can guarantee they
will eventually finish writing all of the data out to the backing device
(i.e. they restart writeback automatically after a reboot or crash).

During writeback, some metadata items will temporarily appear on the
backing device two or more times (a new version of the item was written,
but an old version of the item has not been overwritten yet and remains
on the backing device) while other items will be completely missing (the
old version of the item has been overwritten, but the new version of the
item has not been written yet, so no version of the item exists on the
backing device).  The backing disk will normally be missing significant
portions of the filesystem's metadata as long as there are dirty pages
in the cache device.

A recovery tool reading the backing device can't simply find an old
version of an inode's metadata, get a location for most of its data
blocks, and guess the locations of remaining blocks or truncate the file
(as tools like e2fsck do).  The missing btrfs metadata items are not
present at all on the backing device, because their old versions will be
erased from the backing device during writeback, while the new versions
haven't been written yet and will exist only in the cache device.

If the cache had a non-trivial number of dirty blocks when it failed, then
the above losses occur many thousands of times in the metadata trees, and
each lost page may contain metadata for hundreds of files.  The backing
disk will contain a severely damaged (some might say "destroyed")
filesystem.  Recovery tools would be able to delete incomplete objects
from the filesystem and make the filesystem mountable, but with
significant (if not total) data losses.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-18 21:09     ` Zygo Blaxell
@ 2021-11-19  4:42       ` Matthew Dawson
  2021-11-24  4:43         ` Zygo Blaxell
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Dawson @ 2021-11-19  4:42 UTC (permalink / raw)
  To: Zygo Blaxell; +Cc: Kai Krakow, linux-btrfs

On Thursday, November 18, 2021 4:09:15 P.M. EST Zygo Blaxell wrote:
> On Wed, Nov 17, 2021 at 09:57:40PM -0500, Matthew Dawson wrote:
> > On Monday, November 15, 2021 5:46:43 A.M. EST Kai Krakow wrote:
> > > On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
> > > 
> > > <matthew@mjdsystems.ca> wrote:
> > > > I recently upgraded one of my machines to the 5.15.2 kernel.  On the
> > > > first reboot, I had a kernel fault during the initialization (I didn't
> > > > get to capture the printed stack trace, but I'm 99% sure it did not
> > > > have BTRFS related calls).  I then rebooted the machine back to a 5.14
> > > > kernel, but the BCache (writeback) cache was corrupted.  I then force
> > > > started the underlying disks, but now my BTRFS filesystem will no
> > > > longer mount.  I realize there may be missing/corrupted data, but I
> > > > would like to ideally get any data I can off the disks.
> > > 
> > > I had a similar issue lately where the system didn't reboot cleanly
> > > (there's some issue in the BIOS or with the SSD firmware where it
> > > would disconnect the SSD from SATA a few seconds after boot, forcing
> > > bcache into detaching dirty caches).
> > > 
> > > Since you are seeing transaction IDs lagging behind expectations, I
> > > think you've lost dirty writeback data from bcache. To avoid this in the
> > > future, you should use bcache only in writearound or writethrough
> > > mode.
> > 
> > Considering I started the bcache devices without the cache, I don't doubt
> > I've lost writeback data and I have no doubts there will be issues.  At
> > this point I'm just in data recovery, trying to get what I can.
> 
> The word "issues" is not adequate to describe the catastrophic damage
> to metadata that occurs if the contents of a writeback cache are lost.
> 
> If writeback failure happens to only one btrfs device's cache, you
> can recover with btrfs raid1 self-healing using intact copies stored
> on working devices.  If it happens on multiple btrfs devices at once
> (e.g. due to misconfiguration of bcache with more than one btrfs device
> per pool or more than one bcache pool per SSD, or due to a kernel bug
> that affects all bcache instances at once, or a firmware bug that affects
> each SSD device the same way during a crash) then recovery isn't possible.
> 
> Writeback cache failures are _bad_, falling between "many thousands of
> bad sectors" and "total disk failure" in terms of difficulty of recovery.
> 
> > Hopefully someone has a different idea?  I am posting here because I feel
> > any luck is going to start using more dangerous options and those usually
> > say to ask the mailing list first.
> 
> Your best option would be to get the caches running again, at least in
> read-only mode.  It's not a good option, but all your other options depend
> on having access to as many cached dirty pages as possible.  If all you
> have is the backing devices, then now is the time to scrape what you
> can from the drives with 'btrfs restore' then make use of your backups.
At this point I think I'm stuck with just the backing devices (with GB of lost 
dirty data on the cache).  And I'm primarily in data recovery, trying to get 
whatever good data I can to help supplement the backed up data.

As mentioned in my first email though, btrfs restore fails with the following 
error message:
# btrfs restore -l /dev/dm-2
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
Ignoring transid failure
Couldn't setup extent tree
Couldn't setup device tree
Could not open root, trying backup super
warning, device 6 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 7 is missing
warning, device 9 is missing
warning, device 14 is missing
bytenr mismatch, want=136920576753664, have=0
ERROR: cannot read chunk root
Could not open root, trying backup super
warning, device 6 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 7 is missing
warning, device 9 is missing
warning, device 14 is missing
bytenr mismatch, want=136920576753664, have=0
ERROR: cannot read chunk root
Could not open root, trying backup super

This is despite all devices being up and reported to the kernel.  I was looking 
for help to try to move beyond these errors and get whatever may still be 
available.

If further recovery is impossible, that's fine; I'll wipe and start over, but 
I'd rather try some risky things to get what I can before I do so.

> 
> This is what you're up against:
> 
> btrfs writes metadata pages in a specific order to keep one complete
> metadata tree on disk intact at all times.  This means that a specific
> item of metadata (e.g. a directory or inode) is stored in different disk
> blocks at different times.  Old physical disk blocks are frequently
> recycled to store different data--not merely newer versions of the
> same items, but completely unrelated items from different areas of
> the filesystem.
> 
> Writeback caches write to backing devices in mostly sequential
> LBA order for performance.  This is a defining characteristic of a
> writeback cache--if the cache maintained the btrfs write order on the
> backing device then we'd call it a "writethrough" or "writebehind"
> cache instead.  Writeback caches don't need to respect write order for
> individual blocks on the backing device as long as they can guarantee they
> will eventually finish writing all of the data out to the backing device
> (i.e. they restart writeback automatically after a reboot or crash).
> 
> During writeback, some metadata items will temporarily appear on the
> backing device two or more times (a new version of the item was written,
> but an old version of the item has not been overwritten yet and remains
> on the backing device) while other items will be completely missing (the
> old version of the item has been overwritten, but the new version of the
> item has not been written yet, so no version of the item exists on the
> backing device).  The backing disk will normally be missing significant
> portions of the filesystem's metadata as long as there are dirty pages
> in the cache device.
> 
> A recovery tool reading the backing device can't simply find an old
> version of an inode's metadata, get a location for most of its data
> blocks, and guess the locations of remaining blocks or truncate the file
> (as tools like e2fsck do).  The missing btrfs metadata items are not
> present at all on the backing device, because their old versions will be
> erased from the backing device during writeback, while the new versions
> haven't been written yet and will exist only in the cache device.
> 
> If the cache had a non-trivial number of dirty blocks when it failed, then
> the above losses occur many thousands of times in the metadata trees, and
> each lost page may contain metadata for hundreds of files.  The backing
> disk will contain a severely damaged (some might say "destroyed")
> filesystem.  Recovery tools would be able to delete incomplete objects
> from the filesystem and make the filesystem mountable, but with
> significant (if not total) data losses.





^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-19  4:42       ` Matthew Dawson
@ 2021-11-24  4:43         ` Zygo Blaxell
  2021-11-24  5:11           ` Matthew Dawson
  0 siblings, 1 reply; 8+ messages in thread
From: Zygo Blaxell @ 2021-11-24  4:43 UTC (permalink / raw)
  To: Matthew Dawson; +Cc: Kai Krakow, linux-btrfs

On Thu, Nov 18, 2021 at 11:42:05PM -0500, Matthew Dawson wrote:
> On Thursday, November 18, 2021 4:09:15 P.M. EST Zygo Blaxell wrote:
> > On Wed, Nov 17, 2021 at 09:57:40PM -0500, Matthew Dawson wrote:
> > > On Monday, November 15, 2021 5:46:43 A.M. EST Kai Krakow wrote:
> > > > On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
> > > > 
> > > > <matthew@mjdsystems.ca> wrote:
> > > > > I recently upgraded one of my machines to the 5.15.2 kernel.  On
> > > > > the first reboot, I had a kernel fault during the initialization (I
> > > > > didn't get to capture the printed stack trace, but I'm 99% sure it
> > > > > did not have BTRFS related calls).  I then rebooted the machine back
> > > > > to a 5.14 kernel, but the BCache (writeback) cache was corrupted.  I
> > > > > then force started the underlying disks, but now my BTRFS filesystem
> > > > > will no longer mount.  I realize there may be missing/corrupted data,
> > > > > but I would like to ideally get any data I can off the disks.
> > > > 
> > > > I had a similar issue lately where the system didn't reboot cleanly
> > > > (there's some issue in the BIOS or with the SSD firmware where it
> > > > would disconnect the SSD from SATA a few seconds after boot, forcing
> > > > bcache into detaching dirty caches).
> > > > 
> > > > Since you are seeing transaction IDs lagging behind expectations, I
> > > > think you've lost dirty writeback data from bcache. To avoid this in the
> > > > future, you should use bcache only in writearound or writethrough
> > > > mode.
> > > 
> > > Considering I started the bcache devices without the cache, I don't doubt
> > > I've lost writeback data and I have no doubts there will be issues.  At
> > > this point I'm just in data recovery, trying to get what I can.
> > 
> > The word "issues" is not adequate to describe the catastrophic damage
> > to metadata that occurs if the contents of a writeback cache are lost.
> > 
> > If writeback failure happens to only one btrfs device's cache, you
> > can recover with btrfs raid1 self-healing using intact copies stored
> > on working devices.  If it happens on multiple btrfs devices at once
> > (e.g. due to misconfiguration of bcache with more than one btrfs device
> > per pool or more than one bcache pool per SSD, or due to a kernel bug
> > that affects all bcache instances at once, or a firmware bug that affects
> > each SSD device the same way during a crash) then recovery isn't possible.
> > 
> > Writeback cache failures are _bad_, falling between "many thousands of
> > bad sectors" and "total disk failure" in terms of difficulty of recovery.
> > 
> > > Hopefully someone has a different idea?  I am posting here because I feel
> > > any luck is going to start using more dangerous options and those usually
> > > say to ask the mailing list first.
> > 
> > Your best option would be to get the caches running again, at least in
> > read-only mode.  It's not a good option, but all your other options depend
> > on having access to as many cached dirty pages as possible.  If all you
> > have is the backing devices, then now is the time to scrape what you
> > can from the drives with 'btrfs restore' then make use of your backups.
> At this point I think I'm stuck with just the backing devices (with GB of lost 
> dirty data on the cache).  And I'm primarily in data recovery, trying to get 
> whatever good data I can to help supplement the backed up data.

I don't use words like "catastrophic" casually.  Recovery typically
isn't possible with the backing disks after a writeback cache failure.

The writeback cache algorithm will prefer to keep the most critical
metadata in cache, while writing out-of-date metadata pages out to the
backing devices.  This process effectively wipes btrfs metadata off
the backing disks as the cache fills up, and puts it back as the cache
flushes out.  If a large dirty cache dies, it can leave nothing behind.

> As mentioned in my first email though, btrfs restore fails with the following 
> error message:
> # btrfs restore -l /dev/dm-2
> parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> Ignoring transid failure
> Couldn't setup extent tree
> Couldn't setup device tree
> Could not open root, trying backup super
> warning, device 6 is missing
> warning, device 13 is missing
> warning, device 12 is missing
> warning, device 11 is missing
> warning, device 7 is missing
> warning, device 9 is missing
> warning, device 14 is missing
> bytenr mismatch, want=136920576753664, have=0
> ERROR: cannot read chunk root
> Could not open root, trying backup super
> warning, device 6 is missing
> warning, device 13 is missing
> warning, device 12 is missing
> warning, device 11 is missing
> warning, device 7 is missing
> warning, device 9 is missing
> warning, device 14 is missing
> bytenr mismatch, want=136920576753664, have=0
> ERROR: cannot read chunk root
> Could not open root, trying backup super
> When all devices are up and reported to the kernel.  I was looking for help to 
> try and move beyond these errors and get whatever may still be available.

The general btrfs recovery process is:

	1.  Restore device and chunk trees.  Without these, btrfs
	can't translate logical to physical block addresses, or even
	recognize its own devices, so you get "device is missing" errors.
	The above log shows that device and chunk tree data is now in the
	cache--or at least, not on the backing disks.	'btrfs rescue
	chunk-recover' may locate an older copy of this data by brute
	force search of the disk, if an older copy still exists.

	2.  Find subvol roots to read data.  'btrfs-find-root' will
	do a brute-force search of the disks to locate subvol roots,
	which you can pass to 'btrfs restore -l' to try to read files.
	Normally this produces hundreds of candidates and you'll have
	to try each one.  If you have an old snapshot (one that predates
	the last full cache flush, and no balance, device shrink, device
	remove, defrag, or dedupe operation has occurred since) then you
	might be able to read its entire tree.	Subvols that are modified
	recently will be unusable as they will be missing many or all
	of their pages (they will be in the cache, not the backing disks).

	3.  Verify the data you get back.  The csum tree is no longer
	usable, so you'll have no way to know if any data that you get
	from the filesystem is correct or garbage.  This is true even if
	you are reading from an old snapshot, as the csum tree is global
	to all subvols and will be modified (and moved into the cache)
	by any write to the filesystem.
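
As a rough sketch (the device node and the tree root bytenr below are
placeholders), the commands behind steps 1 and 2 look something like:

btrfs rescue chunk-recover /dev/dm-2      # slow, brute-force scan of the device
btrfs-find-root /dev/dm-2                 # prints candidate tree root bytenrs
btrfs restore -l -t <bytenr> /dev/dm-2    # list roots reachable from a candidate
btrfs restore -t <bytenr> -v -i /dev/dm-2 /mnt/scratch   # scrape files out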

In the logs above we see that you have missing pages in extent, chunk,
and device trees.  In a writeback cache setup, new versions of these
trees will be written to the cache, while the old versions are partially
or completely erased on the backing devices in the process of flushing
out previous dirty pages.  This pattern will repeat for subvol and csum
trees, leaving you with severely damaged or unusable metadata on the
backing disks as long as there are dirty pages in cache.

> If further recovery is impossible, that's fine; I'll wipe and start over, but 
> I'd rather try some risky things to get what I can before I do so.

I wouldn't say it's impossible in theory, but in practice it is a level
of effort comparable to unshredding a phone book--after someone has
grabbed a handful of the shredded paper and burned it.

High-risk interventions like 'check --repair --init-extent-tree' are
likely to have no effect in the best case (they'll give up due to lack
of usable metadata), and will destroy even more data in the worst case
(they'll try modifying the filesystem and overwrite some of the surviving
data).  They depend on having intact device and subvol trees to work,
so if you can't get those back, there's no need to try anything else.

In theory, if you can infer the file structure from the contents of the
files, you might be able to guess some of the missing metadata.  e.g. the
logical-to-physical translation in the device tree only provides about
16 bits of an extent byte address, so you could theoretically build
a tool which tries all 65536 most likely disk locations for a block
until it finds a plausible content match for a file, and use that tool
to reconstruct the device tree.  It might even be possible to automate
this using fragments of the csum tree (assuming the relevant parts of
the csum tree exist on the backing devices and not only in the cache).
This is only the theory--practical tools to do this kind of recovery
don't yet exist.

> > This is what you're up against:
> > 
> > btrfs writes metadata pages in a specific order to keep one complete
> > metadata tree on disk intact at all times.  This means that a specific
> > item of metadata (e.g. a directory or inode) is stored in different disk
> > blocks at different times.  Old physical disk blocks are frequently
> > recycled to store different data--not merely newer versions of the
> > same items, but completely unrelated items from different areas of
> > the filesystem.
> > 
> > Writeback caches write to backing devices in mostly sequential
> > LBA order for performance.  This is a defining characteristic of a
> > writeback cache--if the cache maintained the btrfs write order on the
> > backing device then we'd call it a "writethrough" or "writebehind"
> > cache instead.  Writeback caches don't need to respect write order for
> > individual blocks on the backing device as long as they can guarantee they
> > will eventually finish writing all of the data out to the backing device
> > (i.e. they restart writeback automatically after a reboot or crash).
> > 
> > During writeback, some metadata items will temporarily appear on the
> > backing device two or more times (a new version of the item was written,
> > but an old version of the item has not been overwritten yet and remains
> > on the backing device) while other items will be completely missing (the
> > old version of the item has been overwritten, but the new version of the
> > item has not been written yet, so no version of the item exists on the
> > backing device).  The backing disk will normally be missing significant
> > portions of the filesystem's metadata as long as there are dirty pages
> > in the cache device.
> > 
> > A recovery tool reading the backing device can't simply find an old
> > version of an inode's metadata, get a location for most of its data
> > blocks, and guess the locations of remaining blocks or truncate the file
> > (as tools like e2fsck do).  The missing btrfs metadata items are not
> > present at all on the backing device, because their old versions will be
> > erased from the backing device during writeback, while the new versions
> > haven't been written yet and will exist only in the cache device.
> > 
> > If the cache had a non-trivial number of dirty blocks when it failed, then
> > the above losses occur many thousands of times in the metadata trees, and
> > each lost page may contain metadata for hundreds of files.  The backing
> > disk will contain a severely damaged (some might say "destroyed")
> > filesystem.  Recovery tools would be able to delete incomplete objects
> > from the filesystem and make the filesystem mountable, but with
> > significant (if not total) data losses.
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help recovering filesystem (if possible)
  2021-11-24  4:43         ` Zygo Blaxell
@ 2021-11-24  5:11           ` Matthew Dawson
  0 siblings, 0 replies; 8+ messages in thread
From: Matthew Dawson @ 2021-11-24  5:11 UTC (permalink / raw)
  To: Zygo Blaxell; +Cc: Kai Krakow, linux-btrfs

On Tuesday, November 23, 2021 11:43:43 P.M. EST Zygo Blaxell wrote:
> On Thu, Nov 18, 2021 at 11:42:05PM -0500, Matthew Dawson wrote:
> > On Thursday, November 18, 2021 4:09:15 P.M. EST Zygo Blaxell wrote:
> > > On Wed, Nov 17, 2021 at 09:57:40PM -0500, Matthew Dawson wrote:
> > > > On Monday, November 15, 2021 5:46:43 A.M. EST Kai Krakow wrote:
> > > > > On Mon, 15 Nov 2021 at 02:55, Matthew Dawson
> > > > > 
> > > > > <matthew@mjdsystems.ca> wrote:
> > > > > > I recently upgraded one of my machines to the 5.15.2 kernel.  On
> > > > > > the first reboot, I had a kernel fault during the initialization
> > > > > > (I didn't get to capture the printed stack trace, but I'm 99% sure
> > > > > > it did not have BTRFS related calls).  I then rebooted the machine
> > > > > > back to a 5.14 kernel, but the BCache (writeback) cache was
> > > > > > corrupted.  I then force started the underlying disks, but now my
> > > > > > BTRFS filesystem will no longer mount.  I realize there may be
> > > > > > missing/corrupted data, but I would like to ideally get any data I
> > > > > > can off the disks.
> > > > > 
> > > > > I had a similar issue lately where the system didn't reboot cleanly
> > > > > (there's some issue in the BIOS or with the SSD firmware where it
> > > > > would disconnect the SSD from SATA a few seconds after boot, forcing
> > > > > bcache into detaching dirty caches).
> > > > > 
> > > > > Since you are seeing transaction IDs lagging behind expectations, I
> > > > > think you've lost dirty writeback data from bcache. To avoid this
> > > > > in the future, you should use bcache only in writearound or
> > > > > writethrough mode.
> > > > 
> > > > Considering I started the bcache devices without the cache, I don't
> > > > doubt
> > > > I've lost writeback data and I have no doubts there will be issues. 
> > > > At
> > > > this point I'm just in data recovery, trying to get what I can.
> > > 
> > > The word "issues" is not adequate to describe the catastrophic damage
> > > to metadata that occurs if the contents of a writeback cache are lost.
> > > 
> > > If writeback failure happens to only one btrfs device's cache, you
> > > can recover with btrfs raid1 self-healing using intact copies stored
> > > on working devices.  If it happens on multiple btrfs devices at once
> > > (e.g. due to misconfiguration of bcache with more than one btrfs device
> > > per pool or more than one bcache pool per SSD, or due to a kernel bug
> > > that affects all bcache instances at once, or a firmware bug that
> > > affects
> > > each SSD device the same way during a crash) then recovery isn't
> > > possible.
> > > 
> > > Writeback cache failures are _bad_, falling between "many thousands of
> > > bad sectors" and "total disk failure" in terms of difficulty of
> > > recovery.
> > > 
> > > > Hopefully someone has a different idea?  I am posting here because I
> > > > feel
> > > > any luck is going to start using more dangerous options and those
> > > > usually
> > > > say to ask the mailing list first.
> > > 
> > > Your best option would be to get the caches running again, at least in
> > > read-only mode.  It's not a good option, but all your other options
> > > depend
> > > on having access to as many cached dirty pages as possible.  If all you
> > > have is the backing devices, then now is the time to scrape what you
> > > can from the drives with 'btrfs restore' then make use of your backups.
> > 
> > At this point I think I'm stuck with just the backing devices (with GB of
> > lost dirty data on the cache).  And I'm primarily in data recovery,
> > trying to get whatever good data I can to help supplement the backed up
> > data.
> 
> I don't use words like "catastrophic" casually.  Recovery typically
> isn't possible with the backing disks after a writeback cache failure.
> 
> The writeback cache algorithm will prefer to keep the most critical
> metadata in cache, while writing out-of-date metadata pages out to the
> backing devices.  This process effectively wipes btrfs metadata off
> the backing disks as the cache fills up, and puts it back as the cache
> flushes out.  If a large dirty cache dies, it can leave nothing behind.
> 
> > As mentioned in my first email though, btrfs restore fails with the
> > following error message:
> > # btrfs restore -l /dev/dm-2
> > parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> > parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> > parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> > parent transid verify failed on 132806584614912 wanted 3240123 found 3240119
> > Ignoring transid failure
> > Couldn't setup extent tree
> > Couldn't setup device tree
> > Could not open root, trying backup super
> > warning, device 6 is missing
> > warning, device 13 is missing
> > warning, device 12 is missing
> > warning, device 11 is missing
> > warning, device 7 is missing
> > warning, device 9 is missing
> > warning, device 14 is missing
> > bytenr mismatch, want=136920576753664, have=0
> > ERROR: cannot read chunk root
> > Could not open root, trying backup super
> > warning, device 6 is missing
> > warning, device 13 is missing
> > warning, device 12 is missing
> > warning, device 11 is missing
> > warning, device 7 is missing
> > warning, device 9 is missing
> > warning, device 14 is missing
> > bytenr mismatch, want=136920576753664, have=0
> > ERROR: cannot read chunk root
> > Could not open root, trying backup super
> > When all devices are up and reported to the kernel.  I was looking for
> > help to try and move beyond these errors and get whatever may still be
> > available.
> The general btrfs recovery process is:
> 
> 	1.  Restore device and chunk trees.  Without these, btrfs
> 	can't translate logical to physical block addresses, or even
> 	recognize its own devices, so you get "device is missing" errors.
> 	The above log shows that device and chunk tree data is now in the
> 	cache--or at least, not on the backing disks.	'btrfs rescue
> 	chunk-recover' may locate an older copy of this data by brute
> 	force search of the disk, if an older copy still exists.
> 
> 	2.  Find subvol roots to read data.  'btrfs-find-root' will
> 	do a brute-force search of the disks to locate subvol roots,
> 	which you can pass to 'btrfs restore -l' to try to read files.
> 	Normally this produces hundreds of candidates and you'll have
> 	to try each one.  If you have an old snapshot (one that predates
> 	the last full cache flush, and no balance, device shrink, device
> 	remove, defrag, or dedupe operation has occurred since) then you
> 	might be able to read its entire tree.	Subvols that are modified
> 	recently will be unusable as they will be missing many or all
> 	of their pages (they will be in the cache, not the backing disks).
> 
> 	3.  Verify the data you get back.  The csum tree is no longer
> 	usable, so you'll have no way to know if any data that you get
> 	from the filesystem is correct or garbage.  This is true even if
> 	you are reading from an old snapshot, as the csum tree is global
> 	to all subvols and will be modified (and moved into the cache)
> 	by any write to the filesystem.
> 
> In the logs above we see that you have missing pages in extent, chunk,
> and device trees.  In a writeback cache setup, new versions of these
> trees will be written to the cache, while the old versions are partially
> or completely erased on the backing devices in the process of flushing
> out previous dirty pages.  This pattern will repeat for subvol and csum
> trees, leaving you with severely damaged or unusable metadata on the
> backing disks as long as there are dirty pages in cache.
> 
> > If further recovery is impossible, that's fine; I'll wipe and start over,
> > but I'd rather try some risky things to get what I can before I do so.
> 
> I wouldn't say it's impossible in theory, but in practice it is a level
> of effort comparable to unshredding a phone book--after someone has
> grabbed a handful of the shredded paper and burned it.
> 
> High-risk interventions like 'check --repair --init-extent-tree' are
> likely to have no effect in the best case (they'll give up due to lack
> of usable metadata), and will destroy even more data in the worst case
> (they'll try modifying the filesystem and overwrite some of the surviving
> data).  They depend on having intact device and subvol trees to work,
> so if you can't get those back, there's no need to try anything else.
> 
> In theory, if you can infer the file structure from the contents of the
> files, you might be able to guess some of the missing metadata.  e.g. the
> logical-to-physical translation in the device tree only provides about
> 16 bits of an extent byte address, so you could theoretically build
> a tool which tries all 65536 most likely disk locations for a block
> until it finds a plausible content match for a file, and use that tool
> to reconstruct the device tree.  It might even be possible to automate
> this using fragments of the csum tree (assuming the relevant parts of
> the csum tree exist on the backing devices and not only in the cache).
> This is only the theory--practical tools to do this kind of recovery
> don't yet exist.
Thanks for the suggestions!  I'll give them a try over the next little while 
(I'm getting some extra storage, then I'll try using device mapper's snapshot 
target to avoid destroying what's there).
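
Roughly what I have in mind (paths and sizes are placeholders): overlay each 
backing device with a non-persistent dm snapshot so any writes from the 
recovery tools land in a scratch COW file instead of the real disk:

truncate -s 100G /mnt/scratch/cow-dm2
losetup /dev/loop0 /mnt/scratch/cow-dm2
dmsetup create dm2-snap --table \
  "0 $(blockdev --getsz /dev/dm-2) snapshot /dev/dm-2 /dev/loop0 N 8"
# then point btrfs check / restore at /dev/mapper/dm2-snap instead of /dev/dm-2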

I also might try writing a recovery tool for the bcache cache, doing something 
similar to the dm snapshot system.

Thanks for the pointers!
--
Matthew



^ permalink raw reply	[flat|nested] 8+ messages in thread
