From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Roman Mamedov <rm@romanrm.net>,
	Marco Lorenzo Crociani <marcoc@prismatelecomtesting.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: mount time for big filesystems
Date: Thu, 31 Aug 2017 22:13:46 +0800	[thread overview]
Message-ID: <aecfb058-83bd-9ebd-b043-add6e245051b@gmx.com> (raw)
In-Reply-To: <20170831163656.6be88191@natsu>



On 2017-08-31 19:36, Roman Mamedov wrote:
> On Thu, 31 Aug 2017 12:43:19 +0200
> Marco Lorenzo Crociani <marcoc@prismatelecomtesting.com> wrote:
> 
>> Hi,
>> this 37T filesystem took some times to mount. It has 47
>> subvolumes/snapshots and is mounted with
>> noatime,compress=zlib,space_cache. Is it normal, due to its size?

Just like Hans said, this is caused by BLOCK_GROUP_ITEMs being scattered
throughout the large extent tree, so it's hard to improve in the short
term.

Some ideas, like delayed BLOCK_GROUP_ITEM loading, could greatly improve
mount speed.
But such an enhancement may affect the extent allocator (that is to say,
we can't do any writes before at least some BLOCK_GROUP_ITEMs are
loaded) and may introduce more bugs.
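For reference, a rough way to see how much work mount has to do here is
to count the BLOCK_GROUP_ITEMs in the extent tree (device name is a
placeholder; dump the tree only while the fs is unmounted or read-only):

```shell
# Count BLOCK_GROUP_ITEMs mount must read from the extent tree.
btrfs inspect-internal dump-tree -t extent /dev/sdX \
    | grep -c BLOCK_GROUP_ITEM

# And time the mount itself with the options from this thread:
time mount -o noatime,compress=zlib,space_cache /dev/sdX /mnt
```

On a 37T filesystem that count can easily be in the tens of thousands,
each item potentially a separate seek on rotational storage.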

Other ideas, like a per-chunk extent tree, could also greatly reduce
mount time, but they need an on-disk format change.
(Well, in fact the btrfs on-disk format was never well designed anyway,
so if anyone really works on this, please write a comprehensive
wiki/white paper for it.)

> 
> If you could implement SSD caching in front of your FS (such as lvmcache or
> bcache), that would work wonders for performance in general, and especially
> for mount times. I have seen amazing results with lvmcache (of just 32 GB) for
> a 14 TB FS.

That's impressive.
Since the extent tree is a super hot tree (any CoW modifies the extent
tree), it makes sense.
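For anyone wanting to try this, a minimal lvmcache sketch (volume group
and LV names are illustrative, not from this thread; the btrfs data LV
is assumed to live in "vg0" and the SSD is /dev/nvme0n1p1):

```shell
# Add the SSD to the existing volume group.
vgextend vg0 /dev/nvme0n1p1

# Create a ~32G cache LV on the SSD, matching the size Roman mentioned.
lvcreate -L 32G -n cache0 vg0 /dev/nvme0n1p1

# Attach it as a cache in front of the big data LV; lvconvert will
# prompt to convert cache0 into a cache pool.
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

After that, vg0/data is mounted as before; hot metadata such as the
extent tree should gravitate to the SSD over time.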

> 
> As for in general, with your FS size perhaps you should be using
> "space_cache=v2" for better performance, but I'm not sure if that will have
> any effect on mount time (aside from slowing down the first mount with that).
> 
Unfortunately, the space cache is not loaded until it is used (at least
for v1), so space_cache may not help much with mount time.
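For completeness, switching to v2 is a one-time mount option (device
name is a placeholder):

```shell
# First mount with v2 rebuilds the cache as a free space tree and can
# be slow; the setting then persists across plain mounts.
mount -o space_cache=v2 /dev/sdX /mnt

# To revert to v1 later, mount once with:
#   mount -o clear_cache,space_cache=v1 /dev/sdX /mnt
```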

Thanks,
Qu

Thread overview: 11+ messages
2017-08-31 10:43 mount time for big filesystems Marco Lorenzo Crociani
2017-08-31 11:00 ` Hans van Kranenburg
2017-08-31 11:22   ` Austin S. Hemmelgarn
2017-08-31 11:36 ` Roman Mamedov
2017-08-31 11:45   ` Austin S. Hemmelgarn
2017-08-31 12:16     ` Roman Mamedov
2017-08-31 14:13   ` Qu Wenruo [this message]
2017-09-01 13:52   ` Juan Orti Alcaine
2017-09-01 13:59     ` Austin S. Hemmelgarn
     [not found]       ` <CAC+fKQWFbdF6b3jGO_6hG_pNNzKobBYMeSNyEi5XRCf5YKa81Q@mail.gmail.com>
2017-09-01 15:20         ` Austin S. Hemmelgarn
2017-09-01 22:41           ` Dan Merillat
