From: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
To: Qu Wenruo <quwenruo@cn.fujitsu.com>,
	"Austin S. Hemmelgarn" <ahferroin7@gmail.com>,
	linux-btrfs@vger.kernel.org
Subject: Re: Btrfs Heatmap - v2 - block group internals!
Date: Fri, 18 Nov 2016 16:08:38 +0100	[thread overview]
Message-ID: <9945f788-2516-007b-43ce-dabcff263cd8@mendix.com> (raw)
In-Reply-To: <02310b0f-5fed-746a-ca05-d655dce7d57a@cn.fujitsu.com>

On 11/18/2016 03:08 AM, Qu Wenruo wrote:
> 
> Just found one small problem.
> After specifying --size 16 to output a given block group (small block
> group, I need large size to make output visible), it takes a full cpu
> and takes a long long long time to run.
> So long I don't even want to wait.
> 
> I changed size to 10, and it finished much faster.
> 
> Is that expected?

Yes, the Hilbert curve size increases exponentially with the order.

2**16 = 65536, 65536x65536 = 4294967296 pixels in the png image.

So even if you had a petabyte filesystem, a single pixel in the image
would still represent only ~256KiB. I don't think your small block
groups are 256KiB big?

Specifying values like this does not make any sense at all, and it is
expected not to work well.
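To make the scaling concrete, here's a rough sketch (not the actual
btrfs-heatmap code; the function name is mine) of how image dimensions
and bytes-per-pixel follow from the curve order given via --size:

```python
# Sketch: how image size and bytes-per-pixel scale with the Hilbert
# curve order passed as --size. Pixel count grows as 4**order.
def hilbert_image_stats(order, fs_bytes):
    side = 2 ** order            # pixels per image edge
    pixels = side * side         # total pixels in the png
    return side, pixels, fs_bytes / pixels

# --size 10 on a 1 TiB fs: 1024x1024 image, ~1 MiB per pixel
side, pixels, per_pixel = hilbert_image_stats(10, 2 ** 40)

# --size 16 on a 1 PiB fs: 65536x65536 = 4294967296 pixels,
# so each pixel still only covers ~256 KiB
side16, pixels16, per_pixel16 = hilbert_image_stats(16, 2 ** 50)
```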

>>> I don't see what displaying a blockgroup-level aggregate usage number
>>> has to do with multi-device, except that the same %usage will appear
>>> another time when using RAID1*.
> 
> Although in fact, for profiles like RAID0/5/6/10, it's completely
> possible that one dev_extent contains all the data, while another
> dev_extent is almost empty.

When using something like the RAID0 profile, I would expect 50% of the
data to end up in one dev_extent and 50% in the other?
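That expectation follows from striping itself; a hypothetical toy model
(stripe size and function name are mine, not from btrfs) shows why
round-robin stripe placement splits usage evenly:

```python
# Toy model of RAID0 striping: stripe units are assigned round-robin
# across the dev_extents, so the byte counts come out (nearly) equal.
def raid0_distribution(total_stripes, num_dev_extents):
    counts = [0] * num_dev_extents
    for i in range(total_stripes):
        counts[i % num_dev_extents] += 1
    return counts

# 1000 stripe units over two dev_extents -> a 50/50 split
counts = raid0_distribution(1000, 2)
```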

> Strictly speaking, at full fs or dev level, we should output things at
> dev_extent level, then greyscale should be representing dev_extent
> usage(which is not possible or quite hard to calculate)

That's what it's doing now?

> Anyway, the greyscale is mostly OK, just as a good addition output for
> full fs graph.

I don't follow.

> Although if it could output the fs or specific dev without gray scale, I
> think it would be better.
> It will be much clearer about the dev_extent level fragments.

I have no idea what you mean, sorry.

>>> When generating a picture of a file system with multiple devices,
>>> boundaries between the separate devices are not visible now.
>>>
>>> If someone has a brilliant idea about how to do this without throwing
>>> out actual usage data...
>>>
>> The first thought that comes to mind for me is to make each device be a
>> different color, and otherwise obey the same intensity mapping
>> correlating to how much data is there.  For example, if you've got a 3
>> device FS, the parts of the image that correspond to device 1 would go
>> from 0x000000 to 0xFF0000, the parts for device 2 could be 0x000000 to
>> 0x00FF00, and the parts for device 3 could be 0x000000 to 0x0000FF. This
>> is of course not perfect (you can't tell what device each segment of
>> empty space corresponds to), but would probably cover most use cases.
>> (for example, with such a scheme, you could look at an image and tell
>> whether the data is relatively well distributed across all the devices
>> or you might need to re-balance).
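If I understand the suggestion right, it could be sketched roughly like
this (names and the rounding choice are mine, not from the tool):

```python
# Sketch of the per-device color idea: each device gets a fixed hue,
# and pixel intensity within that hue scales with usage, so empty
# space on every device stays 0x000000.
DEVICE_HUES = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # red, green, blue

def pixel_color(device_index, usage):
    """usage in [0.0, 1.0] -> (r, g, b) with 0-255 channels."""
    r, g, b = DEVICE_HUES[device_index % len(DEVICE_HUES)]
    v = int(255 * usage)
    return (r * v, g * v, b * v)

# device 0 fully used -> pure red 0xFF0000
color = pixel_color(0, 1.0)
```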
> 
> What about linear output separated with lines(or just black)?

Linear output does not produce useful images, except for really small
filesystems.

-- 
Hans van Kranenburg
