From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Christian Pernegger <pernegger@gmail.com>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: first it froze, now the (btrfs) root fs won't mount ...
Date: Wed, 23 Oct 2019 07:31:18 -0400
Message-ID: <0d6683ee-4a2c-f2ab-857b-c7cd44442dce@gmail.com>
In-Reply-To: <CAKbQEqGPY0qwrSLMT03H=s5Tg=C-UCscyUMXK-oLrt5+YjFMqQ@mail.gmail.com>

On 2019-10-22 18:56, Christian Pernegger wrote:
> [Please CC me, I'm not on the list.]
> 
> On Mon, 21 Oct 2019 at 15:34, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>> [...] just fstrim wiped some old tree blocks. But maybe it's some unfortunate race in which fstrim trimmed some tree blocks still in use.
> 
> Forgive me for asking, but assuming that's what happened, why are the
> backup blocks "not in use" from fstrim's perspective in the first
> place? I'd consider backup (meta)data to be valuable payload data,
> something to be stored extra carefully. No use making them if they're
> no good when you need them, after all. In other words, does fstrim by
> default trim btrfs metadata (in which case fstrim's broken) or does
> btrfs in effect store backup data in "unused" space (in which case
> btrfs is broken)?
Because they aren't in use unless you've mounted the volume using them. 
BTRFS doesn't go out of its way to get rid of them, but it really isn't 
using them either once the active tree is fully committed.
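
If you want to look at them, or actually mount with them, something 
along these lines should work (just a sketch; /dev/sdX and /mnt are 
placeholders for your device and mount point):

$ btrfs inspect-internal dump-super -f /dev/sdX   # -f prints the full superblock, backup_roots included
$ mount -o ro,usebackuproot /dev/sdX /mnt         # ask the kernel to try the backup roots at mount time

Only a mount like the second one actually reads those backup trees; a 
normal mount never touches them.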

Note, however, that you're not guaranteed to have working backup 
metadata trees even if you aren't using TRIM, because BTRFS _will_ 
overwrite them eventually, and that might happen as soon as BTRFS starts 
preparing the next commit.

There has been some discussion about how to deal with this sanely, but 
AFAIK, it hasn't produced any patches yet.
> 
>> [...] One good compromise is, only trim unallocated space.
> 
> It had never occurred to me that anything would purposely try to trim
> allocated space ...
I believe Qu is referring specifically to space not allocated at the 
chunk level, not at the block level.  Nothing should be discarding space 
that's allocated at the block level right now, but the current 
implementation will discard space within chunks that is not allocated at 
the block level, which may include old metadata trees.
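
You can see the distinction on any mounted btrfs (illustrative only; 
/mnt is a placeholder):

$ btrfs filesystem usage /mnt

In that output, "Device allocated" vs. "Device unallocated" is the 
chunk-level split Qu is talking about, while "Used" is the block-level 
allocation inside those chunks. A conservative trim would only touch 
the unallocated device space; what fstrim does today also discards the 
free space inside allocated chunks, which is where old metadata tree 
blocks can end up.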
> 
>> As your corruption is only in the extent tree, you should be able to mount it with my patchset, so it's not that screwed up.
> 
> To be clear, we're talking data recovery, not (progress towards) fs
> repair, even if I manage to boot your rescue patchset?
> 
> A few more random observations from playing with the drive image:
> $ btrfs check --init-extent-tree patient
> Opening filesystem to check...
> Checking filesystem on patient
> UUID: c2bd83d6-2261-47bb-8d18-5aba949651d7
> repair mode will force to clear out log tree, are you sure? [y/N]: y
> ERROR: Corrupted fs, no valid METADATA block group found
> ERROR: failed to zero log tree: -117
> ERROR: attempt to start transaction over already running one
> # rollback
> 
> $ btrfs rescue zero-log patient
> checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
> checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
> bad tree block 284041084928, bytenr mismatch, want=284041084928, have=0
> ERROR: could not open ctree
> # rollback
> 
> # hm, super 0 has log_root 284056535040, super 1 and 2 have log_root 0 ...
> $ btrfs check -s1 --init-extent-tree patient
> [...]
> ERROR: errors found in fs roots
> No device size related problem found
> cache and super generation don't match, space cache will be invalidated
> found 431478808576 bytes used, error(s) found
> total csum bytes: 417926772
> total tree bytes: 2203549696
> total fs tree bytes: 1754415104
> total extent tree bytes: 49152
> btree space waste bytes: 382829965
> file data blocks allocated: 1591388033024
>   referenced 539237134336
> 
> That ran a good while, generating a couple of hundred MB of output
> (available on request, of course). In any case, it didn't help.
> 
> $ ~/local/bin/btrfs check -s1 --repair patient
> using SB copy 1, bytenr 67108864
> enabling repair mode
> Opening filesystem to check...
> checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
> checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
> Csum didn't match
> ERROR: cannot open file system
> 
> I don't suppose the roots found by btrfs-find-root and/or subvolumes
> identified by btrfs restore -l would be any help? It's not like the
> real fs root contained anything, just @ [/], @home [/home], and the
> Timeshift subvolumes. If btrfs restore -D is to be believed, the
> casualties under @home, for example, are inconsequential, caches and
> the like, stuff that was likely open for writing at the time.
> 
> I don't know, it just seems strange that with all the (meta)data
> that's obviously still there, it shouldn't be possible to restore the
> fs to some sort of consistent state.
Not all metadata is created equal...

Losing the extent tree shouldn't break things this badly in most cases, 
but there are certain parts of the metadata that, if lost, mean you've got 
a dead FS with no way to rebuild (the chunk tree for example).
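
As a rough way to see the difference (just a sketch, with /dev/sdX as a 
placeholder): the extent tree can in principle be regenerated from the 
other trees, which is what --init-extent-tree tries to do, but the chunk 
tree holds the logical-to-physical address mapping, so without it 
nothing else on the filesystem can even be located.

$ btrfs inspect-internal dump-tree -t extent /dev/sdX | head   # rebuildable, at least in principle
$ btrfs inspect-internal dump-tree -t chunk /dev/sdX | head    # if this is unreadable, everything is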
