From: Dave Chinner <david@fromorbit.com>
To: Jorge Guerra <jorge.guerra@gmail.com>
Cc: linux-xfs@vger.kernel.org, osandov@osandov.com,
	Jorge Guerra <jorgeguerra@fb.com>
Subject: Re: [PATCH] xfs_db: add extent count and file size histograms
Date: Wed, 15 May 2019 09:31:19 +1000
Message-ID: <20190514233119.GS29573@dread.disaster.area>
In-Reply-To: <20190514185026.73788-1-jorgeguerra@gmail.com>

On Tue, May 14, 2019 at 11:50:26AM -0700, Jorge Guerra wrote:
> From: Jorge Guerra <jorgeguerra@fb.com>
> 
> In this change we add two features to the xfs_db 'frag' command:
> 
> 1) Extent count histogram [-e]: This option enables tracking the
>    number of extents per inode (file) as we traverse the file
>    system.  The end result is a histogram of the number of extents per
>    file in power-of-2 buckets.
> 
> 2) File size histogram and file system internal fragmentation stats
>    [-s]: This option enables tracking file sizes both in terms of what
>    has been physically allocated and how much has been written to the
>    file.  In addition, we track the amount of internal fragmentation
>    seen per file.  This is particularly useful in the case of
>    realtime devices, where space is allocated in units of
>    fixed-size extents.
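
[For context: the accounting described above reduces to power-of-2
bucketing plus an allocated-minus-used overhead sum per file. An
untested sketch of that logic follows; the names are illustrative,
not taken from the patch.]

#include <stdint.h>

/*
 * Index of the power-of-2 bucket that val falls into: the smallest i
 * such that val <= 2^i, so bucket i counts the range (2^(i-1), 2^i].
 */
static unsigned int
pow2_bucket(uint64_t val)
{
        unsigned int i = 0;

        while (i < 63 && val > (1ULL << i))
                i++;
        return i;
}

/* Per-file accounting in the spirit of the patch description. */
struct frag_stats {
        uint64_t extent_hist[64];   /* files per extent-count bucket */
        uint64_t size_hist[64];     /* files per file-size bucket */
        uint64_t used;              /* bytes written to files */
        uint64_t allocated;         /* bytes physically allocated */
};

static void
account_file(struct frag_stats *st, uint64_t nextents, uint64_t size,
             uint64_t alloc_bytes)
{
        st->extent_hist[pow2_bucket(nextents)]++;
        st->size_hist[pow2_bucket(size)]++;
        st->used += size;
        st->allocated += alloc_bytes;
        /* internal fragmentation ("overhead") = allocated - used */
}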

I can see the usefulness of having such information, but xfs_db is
the wrong tool/interface for generating such usage reports.

> The man page for xfs_db has been updated to reflect these new command
> line arguments.
> 
> Tests:
> 
> We tested this change on several XFS file systems with different
> configurations:
> 
> 1) regular XFS:
> 
> [root@m1 ~]# xfs_info /mnt/d0
> meta-data=/dev/sdb1              isize=256    agcount=10, agsize=268435455 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=0        finobt=0, sparse=0, rmapbt=0
>          =                       reflink=0
> data     =                       bsize=4096   blocks=2441608704, imaxpct=100
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> [root@m1 ~]# echo "frag -e -s" | xfs_db -r /dev/sdb1
> xfs_db> actual 494393, ideal 489246, fragmentation factor 1.04%
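
[For reference, the reported factor is consistent with
(actual - ideal) / actual: (494393 - 489246) / 494393 ~= 0.0104,
i.e. 1.04% of the actual extents are in excess of an ideally
contiguous layout.]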

For example, xfs_db is not the right tool for probing online, active
filesystems. It is not coherent with the active kernel filesystem,
and is quite capable of walking off into la-la land as a result of
mis-parsing the inconsistent filesystem that is on disk underneath
active mounted filesystems. This does not make for a robust, usable
tool, let alone one that can make use of things like rmap for
querying usage and ownership information really quickly.

To solve this problem, we now have the xfs_spaceman tool and the
GETFSMAP ioctl for running usage queries on mounted filesystems.
That avoids all the coherency and crash problems, and for rmap
enabled filesystems it does not require scanning the entire
filesystem to work out this information (i.e. it can all be derived
from the contents of the rmap tree).

So I'd much prefer that new online filesystem queries go into
xfs_spaceman and use GETFSMAP so they can be accelerated on
rmap-configured filesystems, rather than hoping xfs_db will parse
the entire mounted filesystem correctly while it is being actively
changed...
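
[For reference, an untested sketch of such a GETFSMAP query loop,
using the fsmap_head/fsmap structures and the fsmap_sizeof()/
fsmap_advance() helpers from the linux/fsmap.h uapi header; error
handling trimmed:]

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fsmap.h>

#define NR_RECS 128

int
main(int argc, char **argv)
{
        struct fsmap_head *head;
        struct fsmap *rec;
        unsigned int i;
        int fd;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY);   /* any file on the mounted fs */
        if (fd < 0) {
                perror(argv[1]);
                return 1;
        }

        head = calloc(1, fsmap_sizeof(NR_RECS));
        head->fmh_count = NR_RECS;
        /* low key is all zeroes; make the high key cover everything */
        head->fmh_keys[1].fmr_device = UINT32_MAX;
        head->fmh_keys[1].fmr_flags = UINT32_MAX;
        head->fmh_keys[1].fmr_physical = UINT64_MAX;
        head->fmh_keys[1].fmr_owner = UINT64_MAX;
        head->fmh_keys[1].fmr_offset = UINT64_MAX;

        for (;;) {
                if (ioctl(fd, FS_IOC_GETFSMAP, head) < 0) {
                        perror("FS_IOC_GETFSMAP");
                        break;
                }
                if (!head->fmh_entries)
                        break;
                for (i = 0; i < head->fmh_entries; i++) {
                        rec = &head->fmh_recs[i];
                        printf("dev %u owner %llu phys %llu len %llu\n",
                                rec->fmr_device,
                                (unsigned long long)rec->fmr_owner,
                                (unsigned long long)rec->fmr_physical,
                                (unsigned long long)rec->fmr_length);
                }
                /* FMR_OF_LAST marks the final record of the query range */
                rec = &head->fmh_recs[head->fmh_entries - 1];
                if (rec->fmr_flags & FMR_OF_LAST)
                        break;
                /* otherwise restart just past the last record we saw */
                fsmap_advance(head);
        }
        free(head);
        close(fd);
        return 0;
}

On an rmap-enabled filesystem the kernel can answer these queries
directly from the rmap btree, which is what makes the accelerated
reporting possible.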

> Maximum extents in a file 14
> Histogram of number of extents per file:
>     bucket =       count        % of total
> <=       1 =      350934        97.696 %
> <=       2 =        6231        1.735 %
> <=       4 =        1001        0.279 %
> <=       8 =         953        0.265 %
> <=      16 =          92        0.026 %
> Maximum file size 26.508 MB
> Histogram of file size:
>     bucket =    allocated           used        overhead(bytes)
> <=    4 KB =           0              62           314048512 0.13%
> <=    8 KB =           0          119911        127209263104 53.28%
> <=   16 KB =           0           14543         15350194176 6.43%
> <=   32 KB =         909           12330         11851161600 4.96%
> <=   64 KB =          92            6704          6828642304 2.86%
> <=  128 KB =           1            7132          6933372928 2.90%
> <=  256 KB =           0           10013          8753799168 3.67%
> <=  512 KB =           0           13616          9049227264 3.79%
> <=    1 MB =           1           15056          4774912000 2.00%
> <=    2 MB =      198662           17168          9690226688 4.06%
> <=    4 MB =       28639           21073         11806654464 4.94%
> <=    8 MB =       35169           29878         14200553472 5.95%
> <=   16 MB =       95667           91633         11939287040 5.00%
> <=   32 MB =          71              62            28471742 0.01%
> capacity used (bytes): 1097735533058 (1022.346 GB)
> capacity allocated (bytes): 1336497410048 (1.216 TB)
> block overhead (bytes): 238761885182 (21.750 %)

BTW, "bytes" as a display unit is stupidly verbose and largely
unnecessary. The byte count is /always/ going to be a multiple of
the filesystem block size, and the first thing anyone who wants to
use this for diagnosis is going to have to do is return the byte
count to filesystem blocks (which is what the filesystem itself
tracks everything in. ANd then when you have PB scale filesystems,
anything more than 3 significant digits is just impossible to read
and compare - that "overhead" column (what the "overhead" even
mean?) is largely impossible to read and determine what the actual
capacity used is without counting individual digits in each number.
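
[Illustratively, the kind of block-based, 3-significant-digit
reporting being asked for might look like this hypothetical helper;
untested:]

#include <stdio.h>
#include <stdint.h>

/*
 * Print a byte count as filesystem blocks, scaled so that no more
 * than ~3 significant digits appear, e.g. 238761885182 bytes at
 * bsize=4096 prints as "55.6M blocks".
 */
static void
print_blocks(uint64_t bytes, unsigned int blksize)
{
        static const char sufs[] = "KMGTPE";
        uint64_t blocks = bytes / blksize;
        double v = blocks;
        int i = -1;

        while (v >= 1000 && i < (int)sizeof(sufs) - 2) {
                v /= 1024;
                i++;
        }
        if (i < 0)
                printf("%llu blocks\n", (unsigned long long)blocks);
        else
                printf("%.3g%c blocks\n", v, sufs[i]);
}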

FWIW, we already have extent histogram code in xfs_spaceman
(in spaceman/freesp.c) and in xfs_db (db/freesp.c), so we really
don't need a third implementation of functionality we already
have duplicate copies of. I'd suggest that the histogram code be
factored out, moved to libfrog/, and then enhanced if new
histogram functionality is required...
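
[A sketch of the shape such factoring might take - this is not
existing libfrog API, just an illustration:]

/* hypothetical libfrog/histogram.h */
#include <stdint.h>

struct histent {
        uint64_t low;      /* lowest value this bucket counts */
        uint64_t count;    /* number of observations */
        uint64_t blocks;   /* sum of observed values */
};

struct histogram {
        struct histent *buckets;
        unsigned int nr_buckets;
        uint64_t totexts;      /* total observations */
        uint64_t totblocks;    /* total of observed values */
};

int hist_init(struct histogram *hs, const uint64_t *lows, unsigned int nr);
void hist_add(struct histogram *hs, uint64_t len);  /* bump matching bucket */
void hist_summarize(const struct histogram *hs);    /* one shared report format */
void hist_free(struct histogram *hs);

Both freesp implementations could then feed extent lengths into the
same structure and share a single report formatter.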

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 14+ messages
2019-05-14 18:50 [PATCH] xfs_db: add extent count and file size histograms Jorge Guerra
2019-05-14 19:52 ` Eric Sandeen
2019-05-14 20:02 ` Eric Sandeen
2019-05-15 15:57   ` Jorge Guerra
2019-05-15 16:02     ` Eric Sandeen
2019-05-14 23:31 ` Dave Chinner [this message]
2019-05-15  0:06   ` Eric Sandeen
2019-05-15  2:05     ` Dave Chinner
2019-05-15 16:39       ` Jorge Guerra
2019-05-15 22:55         ` Dave Chinner
2019-05-15 16:15   ` Jorge Guerra
2019-05-15 16:24     ` Eric Sandeen
2019-05-15 16:47       ` Jorge Guerra
2019-05-15 16:51         ` Eric Sandeen
