* Confusing output from fi us/df
@ 2016-06-20 23:30 Marc Grondin
  2016-06-21  0:25 ` Hans van Kranenburg
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Marc Grondin @ 2016-06-20 23:30 UTC (permalink / raw)
  To: linux-btrfs

Hi everyone,


I have a btrfs filesystem on top of a 4x1TB mdraid RAID5 array, and
I've been getting confusing output on metadata usage. It seems that
even though both data and metadata are in the single profile, metadata
is reporting double the space (as if it were in the DUP profile).



root@thebeach /h/marcg> uname -a
Linux thebeach 4.6.2-gentoo-GMAN #1 SMP Sat Jun 11 22:32:27 ADT 2016
x86_64 Intel(R) Core(TM) i5-2320 CPU @ 3.00GHz GenuineIntel GNU/Linux
root@thebeach /h/marcg> btrfs --version
btrfs-progs v4.5.3
root@thebeach /h/marcg> btrfs fi us /media/Storage2
Overall:
Device size: 2.73TiB
Device allocated: 1.71TiB
Device unallocated: 1.02TiB
Device missing: 0.00B
Used: 1.38TiB
Free (estimated): 1.34TiB (min: 1.34TiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)


Data,single: Size:1.71TiB, Used:1.38TiB
/dev/mapper/storage2 1.71TiB


Metadata,single: Size:3.00GiB, Used:1.53GiB
/dev/mapper/storage2 3.00GiB


System,single: Size:32.00MiB, Used:208.00KiB
/dev/mapper/storage2 32.00MiB


Unallocated:
/dev/mapper/storage2 1.02TiB
root@thebeach /h/marcg> btrfs fi df /media/Storage2
Data, single: total=1.71TiB, used=1.38TiB
System, single: total=32.00MiB, used=208.00KiB
Metadata, single: total=3.00GiB, used=1.53GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
root@thebeach /h/marcg>


I'm not sure if this is a known issue, whether it's just how
btrfs-progs reports things, or whether btrfs is actually allocating
that space.


Thank you for reading.


Marc



* Re: Confusing output from fi us/df
  2016-06-20 23:30 Confusing output from fi us/df Marc Grondin
@ 2016-06-21  0:25 ` Hans van Kranenburg
  2016-06-21  9:40   ` Duncan
  2016-06-21  0:55 ` Satoru Takeuchi
  2016-06-22 18:52 ` Chris Murphy
  2 siblings, 1 reply; 5+ messages in thread
From: Hans van Kranenburg @ 2016-06-21  0:25 UTC (permalink / raw)
  To: Marc Grondin, linux-btrfs

Hi,

On 06/21/2016 01:30 AM, Marc Grondin wrote:
>
> I have a btrfs filesystem on top of a 4x1TB mdraid RAID5 array, and
> I've been getting confusing output on metadata usage. It seems that
> even though both data and metadata are in the single profile, metadata
> is reporting double the space (as if it were in the DUP profile).

I guess that's a coincidence.

From the total amount of space you have (on top of the mdraid), there's
3 GiB allocated/reserved for use as metadata. Inside that 3 GiB,
1.53 GiB of actual metadata is present.

>[...]
> Metadata,single: Size:3.00GiB, Used:1.53GiB
> /dev/mapper/storage2 3.00GiB

> Metadata, single: total=3.00GiB, used=1.53GiB

If you'd change to DUP, it would look like:

for fi usage:

Metadata,DUP: Size:3.00GiB, Used:1.53GiB
/dev/mapper/storage2 6.00GiB

for fi df:

Metadata, DUP: total=3.00GiB, used=1.53GiB

Except for the 6.00GiB, which would show the actual usage on disk, all
the other metadata numbers hide the profile ratio.
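
If you ever actually wanted metadata in DUP, the usual route is a
convert balance. A hedged sketch (untested here; mount point taken
from your own output):

  btrfs balance start -mconvert=dup /media/Storage2

That rewrites only the metadata chunks into the DUP profile and leaves
the data profile alone.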

Hans


* Re: Confusing output from fi us/df
  2016-06-20 23:30 Confusing output from fi us/df Marc Grondin
  2016-06-21  0:25 ` Hans van Kranenburg
@ 2016-06-21  0:55 ` Satoru Takeuchi
  2016-06-22 18:52 ` Chris Murphy
  2 siblings, 0 replies; 5+ messages in thread
From: Satoru Takeuchi @ 2016-06-21  0:55 UTC (permalink / raw)
  To: Marc Grondin, linux-btrfs

On 2016/06/21 8:30, Marc Grondin wrote:
> Hi everyone,
> 
> 
> I have a btrfs filesystem on top of a 4x1TB mdraid RAID5 array, and
> I've been getting confusing output on metadata usage. It seems that
> even though both data and metadata are in the single profile, metadata
> is reporting double the space (as if it were in the DUP profile).
> 
> 
> 
> root@thebeach /h/marcg> uname -a
> Linux thebeach 4.6.2-gentoo-GMAN #1 SMP Sat Jun 11 22:32:27 ADT 2016
> x86_64 Intel(R) Core(TM) i5-2320 CPU @ 3.00GHz GenuineIntel GNU/Linux
> root@thebeach /h/marcg> btrfs --version
> btrfs-progs v4.5.3
> root@thebeach /h/marcg> btrfs fi us /media/Storage2
> Overall:
> Device size: 2.73TiB
> Device allocated: 1.71TiB
> Device unallocated: 1.02TiB
> Device missing: 0.00B
> Used: 1.38TiB
> Free (estimated): 1.34TiB (min: 1.34TiB)
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> 
> 
> Data,single: Size:1.71TiB, Used:1.38TiB
> /dev/mapper/storage2 1.71TiB
> 
> 
> Metadata,single: Size:3.00GiB, Used:1.53GiB
> /dev/mapper/storage2 3.00GiB
> 
> 
> System,single: Size:32.00MiB, Used:208.00KiB
> /dev/mapper/storage2 32.00MiB
> 
> 
> Unallocated:
> /dev/mapper/storage2 1.02TiB
> root@thebeach /h/marcg> btrfs fi df /media/Storage2
> Data, single: total=1.71TiB, used=1.38TiB
> System, single: total=32.00MiB, used=208.00KiB
> Metadata, single: total=3.00GiB, used=1.53GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> root@thebeach /h/marcg>
> 
> 
> I'm not sure if this is a known issue, whether it's just how
> btrfs-progs reports things, or whether btrfs is actually allocating
> that space.

Could you point out where you think metadata is reporting double the
space?

from fi us:
> Metadata,single: Size:3.00GiB, Used:1.53GiB
> /dev/mapper/storage2 3.00GiB

from fi df:
> Metadata, single: total=3.00GiB, used=1.53GiB

As far as I can see, Btrfs just allocates 3.00 GiB from
/dev/mapper/storage2, the Metadata,single size is the same as that
allocation (not double), and 1.53 GiB of it is used.


The following is from my own system, where data is single and
metadata is DUP.

from fi us:
Metadata,DUP: Size:384.00MiB, Used:221.36MiB
   /dev/vda3     768.00MiB

from fi df:
Metadata, DUP: total=384.00MiB, used=221.36MiB

Here Btrfs allocates 768.00MiB from /dev/vda3, which is twice the size
of Metadata,DUP (384.00MiB). I guess that is what "metadata is
reporting double the space" would look like, and your output does not
show it.
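
One quick cross-check, assuming the same mount point as before: the
ratio lines in the Overall section of fi us already show the profile
multiplier.

# btrfs fi us /media/Storage2 | grep ratio
Data ratio: 1.00
Metadata ratio: 1.00

With metadata in DUP, that Metadata ratio line would read 2.00.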

CMIIW.

Thanks,
Satoru

> 
> 
> Thank you for reading.
> 
> 
> Marc
> 


* Re: Confusing output from fi us/df
  2016-06-21  0:25 ` Hans van Kranenburg
@ 2016-06-21  9:40   ` Duncan
  0 siblings, 0 replies; 5+ messages in thread
From: Duncan @ 2016-06-21  9:40 UTC (permalink / raw)
  To: linux-btrfs

Hans van Kranenburg posted on Tue, 21 Jun 2016 02:25:20 +0200 as
excerpted:

> On 06/21/2016 01:30 AM, Marc Grondin wrote:
>>
>> I have a btrfs filesystem on top of a 4x1TB mdraid RAID5 array, and
>> I've been getting confusing output on metadata usage. It seems that
>> even though both data and metadata are in the single profile, metadata
>> is reporting double the space (as if it were in the DUP profile).
> 
> I guess that's a coincidence.

Yes.

> From the total amount of space you have (on top of the mdraid), there's
> 3 GiB allocated/reserved for use as metadata. Inside that 3 GiB,
> 1.53 GiB of actual metadata is present.
> 
>>[...]
>> Metadata,single: Size:3.00GiB, Used:1.53GiB
>> /dev/mapper/storage2 3.00GiB
> 
>> Metadata, single: total=3.00GiB, used=1.53GiB

[Explaining a bit more than HvK or ST did.]

Btrfs does two-stage allocation.  First it allocates chunks, separately 
for data vs. metadata.  Then it uses the space in those chunks, until it 
needs to allocate more.

It's simply a coincidence that the actual used metadata space within
the allocation happens to be approximately half of the allocated
metadata chunk space.
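
Concretely, with the numbers from your paste (just arithmetic on the
reported values, nothing new measured):

  Data:     1.71 TiB of chunks allocated, 1.38 TiB written into them
            -> 0.33 TiB allocated but not yet used
  Metadata: 3.00 GiB of chunks allocated, 1.53 GiB written into them
            -> 1.47 GiB allocated but not yet used

The total/Size numbers are stage one (chunk allocation); the used
numbers are stage two (space consumed inside those chunks).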

Though unlike data, metadata space should never get completely full --
btrfs will allocate a new metadata chunk before it reports metadata as
full, because the global reserve comes out of metadata as well. (Global
reserve usage stays at zero unless the filesystem is in severe straits;
if you ever see it above zero, you know the filesystem is in serious
trouble, space-wise.)

So in reality, you have 3 GiB of metadata chunks allocated: 1.53 GiB
of that is used for actual metadata, and half a GiB (512 MiB) is
reserved as global reserve (none of which is used =:^). That means
approximately 2.03 GiB of the 3.00 GiB of metadata chunks is spoken
for, with 0.97 GiB of metadata space free.

Now metadata chunks are nominally 256 MiB (quarter GiB) each, while data 
chunks are nominally 1 GiB each.  However, that's just the _nominal_ 
size.  On TB-scale filesystems they may be larger.

In any case, you could balance the metadata and possibly reclaim a bit of 
space, but with usage including the reserve slightly over 2 GiB, you 
could only get down to 2.25 GiB metadata allocation best-case, and may be 
stuck with 2.5 or even the same 3 GiB, depending on actual metadata chunk 
size.
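
If you did want to try it, a hedged sketch (untested; mount point from
your paste, and the 50 is just an illustrative cutoff, not a
recommendation):

  btrfs balance start -musage=50 /media/Storage2

The -musage filter restricts the balance to metadata chunks at most
50% used, so only the emptiest chunks get rewritten and compacted.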

I'd not worry about it yet, though. Once unallocated space gets down
to about half a TiB, or either data or metadata size becomes a
multiple of actual usage, a balance will arguably be useful. The
numbers look pretty healthy ATM.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Confusing output from fi us/df
  2016-06-20 23:30 Confusing output from fi us/df Marc Grondin
  2016-06-21  0:25 ` Hans van Kranenburg
  2016-06-21  0:55 ` Satoru Takeuchi
@ 2016-06-22 18:52 ` Chris Murphy
  2 siblings, 0 replies; 5+ messages in thread
From: Chris Murphy @ 2016-06-22 18:52 UTC (permalink / raw)
  To: Marc Grondin; +Cc: Btrfs BTRFS

On Mon, Jun 20, 2016 at 5:30 PM, Marc Grondin <marcfgrondin@gmail.com> wrote:

> Metadata,single: Size:3.00GiB, Used:1.53GiB

Read this as:

3GiB of space on the device is reserved for metadata block groups, and
1.53GiB of that is being used. The reservation means that this space
can't be used for other block group types, such as system or data. So
in some sense it is "used", but strictly speaking it's just a
reservation.
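
That reading also explains the "Free (estimated)" line, using only the
numbers already posted:

  unallocated:              1.02 TiB
  unused data reservation:  1.71 TiB - 1.38 TiB = 0.33 TiB
  estimated free:           roughly 1.35 TiB, matching the reported
                            1.34 TiB once you allow for rounding in
                            the TiB figures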




-- 
Chris Murphy

