From: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
To: Martin Raiber <martin@urbackup.org>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: Multiple btrfs-cleaner threads per volume
Date: Thu, 2 Nov 2017 17:56:30 +0100
Message-ID: <9f93fa6f-c5af-3513-3e15-e966bd128b34@mendix.com>
In-Reply-To: <0102015f7d576dad-688908ab-87d9-4715-9626-fb37cfff32e8-000000@eu-west-1.amazonses.com>

On 11/02/2017 04:26 PM, Martin Raiber wrote:
> On 02.11.2017 16:10 Hans van Kranenburg wrote:
>> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>>> help to have multiple btrfs-cleaner threads? The block layer underneath
>>> would have higher throughput with more simultaneous read/write requests.
>> Just curious:
>> * How many subvolumes/snapshots are you removing, and what's the
>> complexity level (like, how many other subvolumes/snapshots reference
>> the same data extents?)
>> * Do you see a lot of CPU usage, or mainly a lot of disk I/O? If it's
>> disk I/O, is it mainly random read I/O, or is it a lot of write traffic?
>> * What mount options are you running with (from /proc/mounts)?

Can you paste the output from /proc/mounts for your filesystem? The
reason I'm asking is that the ssd-related mount options (nossd, ssd,
ssd_spread) can have a huge impact on subvolume removal performance for
very large filesystems like your 50TB one.
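
For example, something like this (using /mnt/backup as a placeholder
for wherever the filesystem is mounted on your system):

  # show the device and the mount options btrfs is actually using
  grep ' /mnt/backup ' /proc/mounts

If the options field shows ssd (which the kernel turns on automatically
when the device reports itself as non-rotational) rather than nossd or
ssd_spread, that already tells us something useful.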

> It is a single block device, not a multi-device btrfs, so
> optimizations in that area wouldn't help. It is a UrBackup system with
> about 200 snapshots per client. 20009 snapshots total. UrBackup reflinks
> files between them, but btrfs-cleaner doesn't use much CPU (so it
> doesn't seem like the backref walking is the problem). btrfs-cleaner is
> probably limited mainly by random read/write IO.

Do you have some graphs, or iostat output? The question is what the
biggest part of the IO consists of: is the device 100% busy doing
random read IO with hardly any writes, or is it 100% utilized because
it's pushing many MiB/s of writes?
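
If you don't have graphs handy, a few minutes of iostat output while
the cleaner is busy would already help, e.g. (iostat is in the sysstat
package):

  # extended per-device statistics in MiB/s, repeated every 5 seconds
  iostat -x -m 5

and then just look at the line for the backing device. The interesting
columns are r/s and w/s versus rMB/s and wMB/s, the average request
size and %util (exact column names vary a bit between sysstat
versions); that should show whether the device is saturated by small
random reads or by a steady stream of writes.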

> The device has a cache,
> so parallel accesses would help, as some of them may hit the cache.
> Looking at the code it seems easy enough to do. The question is whether
> there are any obvious reasons why this wouldn't work (like some lock, etc.).

-- 
Hans van Kranenburg

Thread overview: 5 messages
2017-11-02 15:02 Multiple btrfs-cleaner threads per volume Martin Raiber
2017-11-02 15:07 ` Austin S. Hemmelgarn
2017-11-02 15:10 ` Hans van Kranenburg
2017-11-02 15:26   ` Martin Raiber
2017-11-02 16:56     ` Hans van Kranenburg [this message]
