linux-btrfs.vger.kernel.org archive mirror
* Multiple btrfs-cleaner threads per volume
@ 2017-11-02 15:02 Martin Raiber
  2017-11-02 15:07 ` Austin S. Hemmelgarn
  2017-11-02 15:10 ` Hans van Kranenburg
  0 siblings, 2 replies; 5+ messages in thread
From: Martin Raiber @ 2017-11-02 15:02 UTC (permalink / raw)
  To: linux-btrfs

Hi,

snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
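
For context, as far as I can tell there is exactly one cleaner thread
per mounted filesystem, started at mount time; sketched roughly from
fs/btrfs/disk-io.c (the exact argument passed varies by kernel
version):

    /* open_ctree(): one btrfs-cleaner kthread per filesystem */
    fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
                                           "btrfs-cleaner");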

Regards,
Martin Raiber



* Re: Multiple btrfs-cleaner threads per volume
  2017-11-02 15:02 Multiple btrfs-cleaner threads per volume Martin Raiber
@ 2017-11-02 15:07 ` Austin S. Hemmelgarn
  2017-11-02 15:10 ` Hans van Kranenburg
  1 sibling, 0 replies; 5+ messages in thread
From: Austin S. Hemmelgarn @ 2017-11-02 15:07 UTC (permalink / raw)
  To: Martin Raiber, linux-btrfs

On 2017-11-02 11:02, Martin Raiber wrote:
> Hi,
> 
> snapshot cleanup is a little slow in my case (50TB volume). Would it
> help to have multiple btrfs-cleaner threads? The block layer underneath
> would have higher throughput with more simultaneous read/write requests.
I think proper parallelization of I/O requests would have a bigger
impact. Right now, writes are serialized (they are sent to one device,
then the next, and so on, until they have been sent to all devices),
and reads aren't inherently load-balanced across devices (ideally,
factors other than load, such as where the last read landed on a
rotational device, would also be considered, but even plain load
balancing would be an improvement right now). As a result, multi-device
BTRFS volumes generally don't perform as well as they could, which is
then compounded by other issues (such as snapshot cleanup being
somewhat expensive).
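
To illustrate the read side: last time I looked, the mirror for a
RAID1 read was effectively picked from the PID of the submitting task
rather than from device load. Roughly, simplified from
__btrfs_map_block() in fs/btrfs/volumes.c (exact code varies by kernel
version):

    /* RAID1 read: "balancing" is just pid modulo the mirror count */
    if (map->type & BTRFS_BLOCK_GROUP_RAID1) {
            if (!mirror_num)
                    stripe_index = find_live_mirror(fs_info, map, 0,
                                    map->num_stripes,
                                    current->pid % map->num_stripes,
                                    dev_replace_is_ongoing);
    }

So two busy readers can easily stay pinned to the same device while
the other one sits idle.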



* Re: Multiple btrfs-cleaner threads per volume
  2017-11-02 15:02 Multiple btrfs-cleaner threads per volume Martin Raiber
  2017-11-02 15:07 ` Austin S. Hemmelgarn
@ 2017-11-02 15:10 ` Hans van Kranenburg
  2017-11-02 15:26   ` Martin Raiber
  1 sibling, 1 reply; 5+ messages in thread
From: Hans van Kranenburg @ 2017-11-02 15:10 UTC (permalink / raw)
  To: Martin Raiber, linux-btrfs

Hi Martin,

On 11/02/2017 04:02 PM, Martin Raiber wrote:
> 
> snapshot cleanup is a little slow in my case (50TB volume). Would it
> help to have multiple btrfs-cleaner threads? The block layer underneath
> would have higher throughput with more simultaneous read/write requests.

Just curious:
* How many subvolumes/snapshots are you removing, and what's the
complexity level (like, how many other subvolumes/snapshots reference
the same data extents)?
* Do you see a lot of CPU usage, or mainly a lot of disk I/O? If it's
disk I/O, is it mainly random read I/O, or is it a lot of write
traffic?
* What mount options are you running with (from /proc/mounts)?
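
For the last one, anything that dumps /proc/mounts is fine; for
example, a minimal getmntent(3) loop like this works:

    /* print device, mountpoint and options for all btrfs mounts */
    #include <mntent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            FILE *f = setmntent("/proc/mounts", "r");
            struct mntent *m;

            if (!f)
                    return 1;
            while ((m = getmntent(f)) != NULL)
                    if (strcmp(m->mnt_type, "btrfs") == 0)
                            printf("%s on %s: %s\n", m->mnt_fsname,
                                   m->mnt_dir, m->mnt_opts);
            endmntent(f);
            return 0;
    }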

-- 
Hans van Kranenburg


* Re: Multiple btrfs-cleaner threads per volume
  2017-11-02 15:10 ` Hans van Kranenburg
@ 2017-11-02 15:26   ` Martin Raiber
  2017-11-02 16:56     ` Hans van Kranenburg
  0 siblings, 1 reply; 5+ messages in thread
From: Martin Raiber @ 2017-11-02 15:26 UTC (permalink / raw)
  To: Hans van Kranenburg, linux-btrfs

On 02.11.2017 16:10 Hans van Kranenburg wrote:
> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>> help to have multiple btrfs-cleaner threads? The block layer underneath
>> would have higher throughput with more simultaneous read/write requests.
> Just curious:
> * How many subvolumes/snapshots are you removing, and what's the
> complexity level (like, how many other subvolumes/snapshots reference
> the same data extents)?
> * Do you see a lot of CPU usage, or mainly a lot of disk I/O? If it's
> disk I/O, is it mainly random read I/O, or is it a lot of write
> traffic?
> * What mount options are you running with (from /proc/mounts)?

It is a single block device, not a multi-device btrfs, so
optimizations in that area wouldn't help. It is a UrBackup system with
about 200 snapshots per client, 20009 snapshots in total. UrBackup
reflinks files between them, but btrfs-cleaner doesn't use much CPU
(so the backref walking doesn't seem to be the problem). btrfs-cleaner
is probably limited mainly by random read/write I/O. The device has a
cache, so parallel accesses would help, as some of them may hit the
cache. Looking at the code, it seems easy enough to do. The question
is whether there are any obvious reasons why this wouldn't work (some
lock, etc.).
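
For reference, the loop I'm looking at is roughly this, a simplified
sketch of cleaner_kthread() in fs/btrfs/disk-io.c (4.x-era code from
memory; details differ between kernel versions). Note the
per-filesystem cleaner_mutex, which is exactly the kind of lock I mean:

    static int cleaner_kthread(void *arg)
    {
            struct btrfs_root *root = arg;
            struct btrfs_fs_info *fs_info = root->fs_info;
            int again;

            while (1) {
                    again = 0;

                    /* everything below runs under the per-filesystem
                     * cleaner_mutex, so only one cleaner runs at a time */
                    if (mutex_trylock(&fs_info->cleaner_mutex)) {
                            btrfs_run_delayed_iputs(fs_info);
                            /* removes items of one dead root per pass */
                            again = btrfs_clean_one_deleted_snapshot(root);
                            mutex_unlock(&fs_info->cleaner_mutex);
                    }

                    if (!again) {
                            /* sleep until the next transaction commit
                             * wakes us up */
                            set_current_state(TASK_INTERRUPTIBLE);
                            if (!kthread_should_stop())
                                    schedule();
                            __set_current_state(TASK_RUNNING);
                    }
                    if (kthread_should_stop())
                            return 0;
            }
    }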


* Re: Multiple btrfs-cleaner threads per volume
  2017-11-02 15:26   ` Martin Raiber
@ 2017-11-02 16:56     ` Hans van Kranenburg
  0 siblings, 0 replies; 5+ messages in thread
From: Hans van Kranenburg @ 2017-11-02 16:56 UTC (permalink / raw)
  To: Martin Raiber, linux-btrfs

On 11/02/2017 04:26 PM, Martin Raiber wrote:
> On 02.11.2017 16:10 Hans van Kranenburg wrote:
>> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>>> help to have multiple btrfs-cleaner threads? The block layer underneath
>>> would have higher throughput with more simultaneous read/write requests.
>> Just curious:
>> * How many subvolumes/snapshots are you removing, and what's the
>> complexity level (like, how many other subvolumes/snapshots reference
>> the same data extents)?
>> * Do you see a lot of CPU usage, or mainly a lot of disk I/O? If it's
>> disk I/O, is it mainly random read I/O, or is it a lot of write
>> traffic?
>> * What mount options are you running with (from /proc/mounts)?

Can you paste the output from /proc/mounts for your filesystem? The
reason I'm asking is that the nossd/ssd/ssd_spread-related mount
options can have a huge impact on subvolume removal performance on
very large filesystems like your 50TB one.

> It is a single block device, not a multi-device btrfs, so
> optimizations in that area wouldn't help. It is a UrBackup system with
> about 200 snapshots per client, 20009 snapshots in total. UrBackup
> reflinks files between them, but btrfs-cleaner doesn't use much CPU
> (so the backref walking doesn't seem to be the problem). btrfs-cleaner
> is probably limited mainly by random read/write I/O.

Do you have some graphs, or iostat output? The question is what the
biggest part of the I/O consists of: is the device at 100% utilization
because of random read I/O with few writes, or because of many MiB/s
of writes?

> The device has a cache, so parallel accesses would help, as some of
> them may hit the cache. Looking at the code, it seems easy enough to
> do. The question is whether there are any obvious reasons why this
> wouldn't work (some lock, etc.).

-- 
Hans van Kranenburg


end of thread

Thread overview: 5+ messages
2017-11-02 15:02 Multiple btrfs-cleaner threads per volume Martin Raiber
2017-11-02 15:07 ` Austin S. Hemmelgarn
2017-11-02 15:10 ` Hans van Kranenburg
2017-11-02 15:26   ` Martin Raiber
2017-11-02 16:56     ` Hans van Kranenburg
