linux-btrfs.vger.kernel.org archive mirror
* leaf size and compression
@ 2019-05-30 19:09 Chris Murphy
From: Chris Murphy @ 2019-05-30 19:09 UTC (permalink / raw)
  To: Btrfs BTRFS

Hi,

I recently set up a test ppc64le VM and incidentally noticed a few
things. Since the page size is 64KiB, the sector size and node size
from a default mkfs.btrfs also became 64KiB. As a result, I saw quite
a lot of small files stored inline in a single leaf, compared to the
default 16KiB node size on x86, where the inline path often seems to
give up and put even files smaller than 1KiB into a data block group,
thus taking up a full 4KiB extent.
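A back-of-envelope sketch of the space difference (the struct sizes below are approximations of the btrfs on-disk format, and the sketch ignores the inode and directory items each file also needs, so treat the numbers as illustrative only):

```python
# Rough comparison of storing many small files inline in metadata
# leaves versus as sector-sized data extents. Sizes are approximate:
# btrfs_header ~101 bytes, btrfs_item ~25 bytes, and the fixed part of
# btrfs_file_extent_item before the inline data ~21 bytes.

LEAF_HEADER = 101        # bytes consumed by the leaf's btrfs_header
ITEM_HEADER = 25         # bytes per btrfs_item slot
FILE_EXTENT_FIXED = 21   # bytes of btrfs_file_extent_item before inline data

def inline_files_per_leaf(leaf_size, file_size):
    """Roughly how many inline file extent items holding `file_size`
    bytes of data fit in one leaf of `leaf_size` bytes."""
    usable = leaf_size - LEAF_HEADER
    per_item = ITEM_HEADER + FILE_EXTENT_FIXED + file_size
    return usable // per_item

for leaf in (16 * 1024, 64 * 1024):
    n = inline_files_per_leaf(leaf, 1024)
    print(f"{leaf // 1024}KiB leaf: ~{n} inline 1KiB files "
          f"(same files as 4KiB data extents: {n * 4}KiB)")
```

By this estimate a 64KiB leaf holds roughly four times as many inline 1KiB files as a 16KiB leaf, while pushing the same files out to 4KiB data extents costs a full sector each.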

Further, because compression works on 128KiB "segments" (not sure of
the proper term), there is quite a metadata explosion with compression
enabled: it takes many more 16KiB leaves to hold that metadata than
64KiB leaves.
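Rough arithmetic for that explosion (assumptions: compressed extents are capped at 128KiB, uncompressed extents can grow to 128MiB, and a file extent item plus its item header costs about 78 bytes; this counts only the file extent items, not checksums or other metadata):

```python
# Estimate how many metadata leaves the file extent items of one file
# need, compressed vs uncompressed, at different leaf sizes.

KiB = 1024
MiB = 1024 * KiB

LEAF_HEADER = 101            # approx. bytes of btrfs_header per leaf
FILE_EXTENT_ITEM = 25 + 53   # approx. item header + btrfs_file_extent_item

def leaves_needed(file_size, extent_size, leaf_size):
    """Leaves required just for the file extent items of one file."""
    extents = -(-file_size // extent_size)                      # ceil
    per_leaf = (leaf_size - LEAF_HEADER) // FILE_EXTENT_ITEM
    return -(-extents // per_leaf)                              # ceil

size = 1024 * MiB  # a 1GiB file
for leaf in (16 * KiB, 64 * KiB):
    comp = leaves_needed(size, 128 * KiB, leaf)    # compressed: 128KiB extents
    uncomp = leaves_needed(size, 128 * MiB, leaf)  # uncompressed: 128MiB extents
    print(f"{leaf // KiB}KiB leaves: compressed ~{comp}, uncompressed ~{uncomp}")
```

The 1GiB compressed file needs 8192 extent records instead of 8, so the leaf count for its extent metadata is dominated by leaf size in the compressed case and is trivial either way in the uncompressed one.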

I'm wondering if anyone has done benchmarking that demonstrates any
meaningful difference between 16KiB, 32KiB, and 64KiB node/leaf sizes
when using compression? I'm pretty sure all the prior benchmarks, back
in ancient times when the default node size was changed from 4KiB to
16KiB, were predicated on uncompressed data.

It could be a question of how many angels can dance on the head of a
pin, but there it is.


-- 
Chris Murphy
