From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Erik Jensen <erikjensen@rkjnsn.net>
Cc: Su Yue <l@damenly.su>, Hugo Mills <hugo@carfax.org.uk>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: "bad tree block start" when trying to mount on ARM
Date: Sat, 20 Feb 2021 14:01:52 +0800
Message-ID: <aaf9f863-9f16-704f-9682-ac52626d0acc@gmx.com>
In-Reply-To: <CAMj6ewON-ADoVKRL8yhy+vYaKoxGd=YwdpZkrDRRRG_8aOMjeA@mail.gmail.com>



On 2021/2/20 12:28 PM, Erik Jensen wrote:
> On Fri, Feb 19, 2021 at 7:16 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>> On 2021/2/20 10:47 AM, Erik Jensen wrote:
>>> Given that it sounds like the issue is the metadata address space, and
>>> given that I surely don't actually have 16TiB of metadata on a 24TiB
>>> file system (indeed, Metadata, RAID1: total=30.00GiB, used=28.91GiB),
>>> is there any way I could compact the metadata offsets into the lower
>>> 16TiB of the virtual metadata inode? Perhaps that could be something
>>> balance could be taught to do? (Obviously, the initial run of such a
>>> balance would have to be performed using a 64-bit system.)
>>
>> Unfortunately, no.
>>
>> Btrfs relies on increasing bytenr in the logical address space for
>> things like balance, thus we can't relocate chunks to smaller bytenr.
>
> That's… unfortunate. How much relies on the assumption that bytenr is monotonic?

IIRC it's mostly balance itself.
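
To put a number on the limit itself: the metadata inode's page cache
is indexed by pgoff_t, which is an unsigned long (32 bits on your ARM
box), so with 4K pages it can only reach 2^32 * 2^12 = 2^44 bytes.
A minimal user-space sketch of that arithmetic (not btrfs code):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint64_t max_pages = 1ULL << 32;   /* pgoff_t range on 32-bit */
      uint64_t page_size = 4096;         /* 4K pages */
      uint64_t limit = max_pages * page_size;

      printf("metadata address space limit: %llu bytes (%llu TiB)\n",
             (unsigned long long)limit,
             (unsigned long long)(limit >> 40));
      return 0;
  }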

>
> Brainstorming some ideas, is compacting the address space something
> that could be done offline? E.g., maybe some two-pass process: first
> something balance-like that bumps all of the metadata up to a compact
> region of address space, starting at a new 16TiB boundary, and then a
> follow-up pass that just strips the top bits off?

That would need btrfs-progs support for off-line balancing.

I used to have this idea, but saw very limited use for it.

This would be the safest bet, but it needs a lot of work, even if
that work is all in user space.
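
If someone does pick this up, the second pass you describe could be
little more than a masking rewrite of every bytenr. A hypothetical
sketch (the names are made up, this is not btrfs-progs API), assuming
the first pass already packed all metadata into a single 16TiB-aligned
window:

  #include <stdint.h>

  #define WINDOW_BITS 44                          /* 16TiB = 2^44 bytes */
  #define WINDOW_MASK ((1ULL << WINDOW_BITS) - 1)

  /* Strip the bits above the window, keeping the offset inside it,
   * so every bytenr ends up below 2^44 and fits a 32-bit page index. */
  static uint64_t compact_bytenr(uint64_t bytenr)
  {
      return bytenr & WINDOW_MASK;
  }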

>
> Or maybe once all of the bytenrs are brought within 16TiB of each
> other by balance, btrfs could just keep track of an offset that needs
> to be applied when mapping page cache indexes?

But a further balance or new chunk allocation can still go beyond the
limit.

This is the biggest problem, and one other filesystems don't need to
bother with: we can allocate chunks dynamically, while they can't.
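
To make the failure mode concrete, here is the offset idea as a
hypothetical sketch (not kernel code):

  #include <stdint.h>

  /* Map a metadata bytenr to a page cache index relative to the
   * lowest in-use bytenr rather than to logical offset zero. */
  static uint64_t bytenr_to_index(uint64_t bytenr, uint64_t base)
  {
      return (bytenr - base) >> 12;    /* 4K pages */
  }

The problem is that nothing stops a later chunk allocation from
handing out a bytenr at or above base + 16TiB, at which point the
resulting index overflows a 32-bit pgoff_t again.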

>
> Or maybe btrfs could use multiple virtual inodes on 32-bit systems,
> one for each 16TiB block of address space with metadata in it? If this
> were to ever grow to need more than a handful of virtual inodes, it
> seems like a balance *would* actually help in this case by compacting
> the metadata higher in the address space, allowing the virtual inodes
> for lower in the address space to be dropped.

This may be a good idea.

But the problem of test coverage is always there.

We could spend tons of lines on it, but in the end it would not really
be well tested, as it's really hard to exercise 32-bit-only code paths
in regular testing.
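
The mapping itself would be the easy part, something like this
hypothetical sketch (not kernel code):

  #include <stdint.h>

  #define WINDOW_BITS 44   /* one virtual inode per 16TiB of bytenr space */

  /* Which virtual inode covers this bytenr. */
  static unsigned int vinode_of(uint64_t bytenr)
  {
      return (unsigned int)(bytenr >> WINDOW_BITS);
  }

  /* Page cache index within that inode, for 4K pages. */
  static uint64_t vinode_index(uint64_t bytenr)
  {
      return (bytenr & ((1ULL << WINDOW_BITS) - 1)) >> 12;
  }

The hard part is not the math but wiring every metadata page cache
lookup through the extra indirection, and then actually testing it.
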
>
> Or maybe btrfs could just not use the page cache for the metadata
> inode once the offset exceeds 16TiB, and only cache at the block
> layer? This would surely hurt performance, but at least the filesystem
> could still be accessed.

I don't believe it's really possible, unless we completely override
the XArray infrastructure provided by MM and implement a btrfs-only
structure.

That's too costly.

>
> Given that this issue appears to be not due to the size of the
> filesystem, but merely how much I've used it, having the only solution
> be to copy all of the data off, reformat the drives, and then restore
> every time filesystem usage exceeds a certain threshold is… not very
> satisfying.

Yeah, definitely not a good experience.

>
> Finally, I've never done kernel dev before, but I do have some C
> experience, so if there is a solution that falls into the category of
> seeming reasonable, likely to be accepted if implemented, but being
> unlikely to get implemented given the low priority of supporting
> 32-bit systems, let me know and maybe I can carve out some time to
> give it a try.
>
BTW, if you want things like a 64K page size while still keeping the
4K sector size of your existing btrfs, then I guess you may be
interested in the recent subpage support, which allows btrfs to mount
a 4K sector size fs with a 64K page size.

Unfortunately it's still WIP, but it may fit your use case, as ARM
supports multiple page sizes (4K, 16K, 64K).
(Although we are only going to support 64K pages for now.)
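
The core of the subpage idea is that with 64K pages and 4K sectors,
one page backs 16 sectors, so per-sector state needs bitmaps rather
than the single set of page flags. A rough sketch of the concept only
(the real kernel structures differ):

  #include <stdint.h>

  #define PAGE_SIZE  65536u
  #define SECTORSIZE 4096u
  #define SECTORS_PER_PAGE (PAGE_SIZE / SECTORSIZE)   /* 16 */

  /* One bit per 4K sector inside a 64K page. */
  struct subpage_state {
      uint16_t uptodate_bitmap;
      uint16_t dirty_bitmap;
  };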

Thanks,
Qu

