* Out of space and incorrect size reported
@ 2018-03-21 21:53 Shane Walton
2018-03-21 21:56 ` Hugo Mills
0 siblings, 1 reply; 4+ messages in thread
From: Shane Walton @ 2018-03-21 21:53 UTC (permalink / raw)
To: linux-btrfs
> uname -a
Linux rockstor 4.4.5-1.el7.elrepo.x86_64 #1 SMP Thu Mar 10 11:45:51 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
> btrfs --version
btrfs-progs v4.4.1
> btrfs fi df /mnt2/pool_homes
Data, RAID1: total=240.00GiB, used=239.78GiB
System, RAID1: total=8.00MiB, used=64.00KiB
Metadata, RAID1: total=8.00GiB, used=5.90GiB
GlobalReserve, single: total=512.00MiB, used=59.31MiB
> btrfs filesystem show /mnt2/pool_homes
Label: 'pool_homes' uuid: 0987930f-8c9c-49cc-985e-de6383863070
Total devices 2 FS bytes used 245.75GiB
devid 1 size 465.76GiB used 248.01GiB path /dev/sda
devid 2 size 465.76GiB used 248.01GiB path /dev/sdb
Why is the line above "Data, RAID1: total=240.00GiB, used=239.78GiB" almost full and limited to 240 GiB when I have 2x 500 GB HDDs? This was all created with the Rockstor platform, and it says the "share" should be 400 GB.
What can I do to make this larger or closer to the full size of 465 GiB (minus the System and Metadata overhead)?
Thanks!
Shane
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: Out of space and incorrect size reported
2018-03-21 21:53 Out of space and incorrect size reported Shane Walton
@ 2018-03-21 21:56 ` Hugo Mills
2018-03-22 0:56 ` Shane Walton
0 siblings, 1 reply; 4+ messages in thread
From: Hugo Mills @ 2018-03-21 21:56 UTC (permalink / raw)
To: Shane Walton; +Cc: linux-btrfs
On Wed, Mar 21, 2018 at 09:53:39PM +0000, Shane Walton wrote:
> > uname -a
> Linux rockstor 4.4.5-1.el7.elrepo.x86_64 #1 SMP Thu Mar 10 11:45:51 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
>
> > btrfs --version
> btrfs-progs v4.4.1
>
> > btrfs fi df /mnt2/pool_homes
> Data, RAID1: total=240.00GiB, used=239.78GiB
> System, RAID1: total=8.00MiB, used=64.00KiB
> Metadata, RAID1: total=8.00GiB, used=5.90GiB
> GlobalReserve, single: total=512.00MiB, used=59.31MiB
>
> > btrfs filesystem show /mnt2/pool_homes
> Label: 'pool_homes' uuid: 0987930f-8c9c-49cc-985e-de6383863070
> Total devices 2 FS bytes used 245.75GiB
> devid 1 size 465.76GiB used 248.01GiB path /dev/sda
> devid 2 size 465.76GiB used 248.01GiB path /dev/sdb
>
> Why is the line above "Data, RAID1: total=240.00GiB, used=239.78GiB" almost full and limited to 240 GiB when I have 2x 500 GB HDDs? This was all created with the Rockstor platform, and it says the "share" should be 400 GB.
>
> What can I do to make this larger or closer to the full size of 465 GiB (minus the System and Metadata overhead)?
Most likely, you need to upgrade your kernel to get past the known
bug (fixed in about 4.6 or so, if I recall correctly), and then mount
with -o clear_cache to force the free space cache to be rebuilt.
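As a sketch of that suggestion in shell form (the mount point and device are taken from the output quoted above; the root/block-device guard is an addition so the sketch is safe to run anywhere):

```shell
#!/bin/sh
# Sketch of the suggested free-space-cache rebuild. Assumes the pool is
# /dev/sda mounted at /mnt2/pool_homes, as in the `btrfs fi show` output.
MNT=/mnt2/pool_homes
DEV=/dev/sda
if [ "$(id -u)" -eq 0 ] && [ -b "$DEV" ]; then
    umount "$MNT"
    mount -o clear_cache "$DEV" "$MNT"
    # dmesg should now report "force clearing of disk cache"
    dmesg | tail -n 5
else
    echo "skipping: need root and a block device at $DEV"
fi
```

Note that clear_cache is a one-off option: once the cache has been rebuilt, subsequent mounts can go back to the normal options.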
Hugo.
--
Hugo Mills | Q: What goes, "Pieces of seven! Pieces of seven!"?
hugo@... carfax.org.uk | A: A parroty error.
http://carfax.org.uk/ |
PGP: E2AB1DE4 |
* Re: Out of space and incorrect size reported
2018-03-21 21:56 ` Hugo Mills
@ 2018-03-22 0:56 ` Shane Walton
2018-03-22 8:54 ` Duncan
0 siblings, 1 reply; 4+ messages in thread
From: Shane Walton @ 2018-03-22 0:56 UTC (permalink / raw)
To: Hugo Mills; +Cc: linux-btrfs
Unfortunately this didn't seem to correct the problem. Please see below:
> uname -a
Linux rockstor 4.12.4-1.el7.elrepo.x86_64 #1 SMP Thu Jul 27 20:03:28 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
> btrfs --version
btrfs-progs v4.12
> btrfs fi df -H /mnt2/pool_homes
Data, RAID1: total=257.70GB, used=257.46GB
System, RAID1: total=8.39MB, used=65.54kB
Metadata, RAID1: total=7.52GB, used=6.35GB
GlobalReserve, single: total=498.27MB, used=0.00B
> btrfs fi show /mnt2/pool_homes
Label: 'pool_homes' uuid: 0987930f-8c9c-49cc-985e-de6383863070
Total devices 2 FS bytes used 245.69GiB
devid 1 size 465.76GiB used 247.01GiB path /dev/sda
devid 2 size 465.76GiB used 247.01GiB path /dev/sdb
Rockstor mounts everything over and over, even if I manually unmount, so I did the following:
> umount /mnt2/pool_homes; mount -o clear_cache /dev/sda /mnt2/pool_home
dmesg shows the following:
[ 3473.848389] BTRFS info (device sda): use no compression
[ 3473.848393] BTRFS info (device sda): disk space caching is enabled
[ 3473.848394] BTRFS info (device sda): has skinny extents
[ 3548.337574] BTRFS info (device sda): force clearing of disk cache
[ 3548.337578] BTRFS info (device sda): disk space caching is enabled
[ 3548.337580] BTRFS info (device sda): has skinny extents
Any help is appreciated!
> On Mar 21, 2018, at 5:56 PM, Hugo Mills <hugo@carfax.org.uk> wrote:
>
> On Wed, Mar 21, 2018 at 09:53:39PM +0000, Shane Walton wrote:
>>> uname -a
>> Linux rockstor 4.4.5-1.el7.elrepo.x86_64 #1 SMP Thu Mar 10 11:45:51 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
>>
>>> btrfs --version
>> btrfs-progs v4.4.1
>>
>>> btrfs fi df /mnt2/pool_homes
>> Data, RAID1: total=240.00GiB, used=239.78GiB
>> System, RAID1: total=8.00MiB, used=64.00KiB
>> Metadata, RAID1: total=8.00GiB, used=5.90GiB
>> GlobalReserve, single: total=512.00MiB, used=59.31MiB
>>
>>> btrfs filesystem show /mnt2/pool_homes
>> Label: 'pool_homes' uuid: 0987930f-8c9c-49cc-985e-de6383863070
>> Total devices 2 FS bytes used 245.75GiB
>> devid 1 size 465.76GiB used 248.01GiB path /dev/sda
>> devid 2 size 465.76GiB used 248.01GiB path /dev/sdb
>>
>> Why is the line above "Data, RAID1: total=240.00GiB, used=239.78GiB" almost full and limited to 240 GiB when I have 2x 500 GB HDDs? This was all created with the Rockstor platform, and it says the "share" should be 400 GB.
>>
>> What can I do to make this larger or closer to the full size of 465 GiB (minus the System and Metadata overhead)?
>
> Most likely, you need to upgrade your kernel to get past the known
> bug (fixed in about 4.6 or so, if I recall correctly), and then mount
> with -o clear_cache to force the free space cache to be rebuilt.
>
> Hugo.
>
> --
> Hugo Mills | Q: What goes, "Pieces of seven! Pieces of seven!"?
> hugo@... carfax.org.uk | A: A parroty error.
> http://carfax.org.uk/ |
> PGP: E2AB1DE4 |
* Re: Out of space and incorrect size reported
2018-03-22 0:56 ` Shane Walton
@ 2018-03-22 8:54 ` Duncan
0 siblings, 0 replies; 4+ messages in thread
From: Duncan @ 2018-03-22 8:54 UTC (permalink / raw)
To: linux-btrfs
Shane Walton posted on Thu, 22 Mar 2018 00:56:05 +0000 as excerpted:
>>>> btrfs fi df /mnt2/pool_homes
>>> Data, RAID1: total=240.00GiB, used=239.78GiB
>>> System, RAID1: total=8.00MiB, used=64.00KiB
>>> Metadata, RAID1: total=8.00GiB, used=5.90GiB
>>> GlobalReserve, single: total=512.00MiB, used=59.31MiB
>>>
>>>> btrfs filesystem show /mnt2/pool_homes
>>> Label: 'pool_homes' uuid: 0987930f-8c9c-49cc-985e-de6383863070
>>> Total devices 2 FS bytes used 245.75GiB
>>> devid 1 size 465.76GiB used 248.01GiB path /dev/sda
>>> devid 2 size 465.76GiB used 248.01GiB path /dev/sdb

(The output from the btrfs filesystem usage command, which is
relatively new and thus possibly not yet in the old 4.4 you posted
with above and upgraded from, makes this somewhat clearer, tho.)
>>>
>>> Why is the line above "Data, RAID1: total=240.00GiB, used=239.78GiB"
>>> almost full and limited to 240 GiB when I have 2x 500 GB HDDs?
>>> What can I do to make this larger or closer to the full size of 465
>>> GiB (minus the System and Metadata overhead)?
By my read, Hugo answered correctly, but (I think) not the question you
asked.
The upgrade was certainly a good idea: 4.4 is quite old now and not
well supported here, as this is a development list and we tend to be
focused on new code, not ancient history. But it didn't change the
report output as you expected, because, based on your question, you're
misreading it; it doesn't say what you're interpreting it as saying.
BTW, you might like the output of btrfs filesystem usage a bit better,
as it's somewhat clearer than the previously required pair of btrfs fi
df and btrfs fi show (usage is a relatively new subcommand that might
not have been in 4.4 yet). But understanding how btrfs works and what
the reported numbers mean is still useful.
Btrfs does two-stage allocation. First, it allocates chunks of a
specific type, normally data or metadata, from unused/unallocated
space. (System is special: there's normally only one chunk, so no more
get allocated, and the global reserve is actually reserved from
metadata and counts as part of it. Unallocated space isn't shown by
show/df, but usage shows it separately.) Then, when necessary, btrfs
actually uses space from within the chunks it allocated previously.
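As a purely illustrative model of those two stages (nothing here is btrfs code; the fixed 1 GiB chunk size and single-device accounting are simplifying assumptions):

```python
# Toy model of two-stage allocation: chunks are claimed from
# unallocated space first, then actual writes fill them.
class ToyBtrfs:
    CHUNK = 1  # GiB; real chunk sizes vary, this is a simplification

    def __init__(self, device_gib):
        self.unallocated = device_gib
        self.chunks = {"data": [0, 0.0], "metadata": [0, 0.0]}  # [total, used]

    def write(self, kind, gib):
        total, used = self.chunks[kind]
        # Stage 1: allocate new chunks only when existing ones run short.
        while used + gib > total:
            if self.unallocated < self.CHUNK:
                raise OSError("ENOSPC")
            self.unallocated -= self.CHUNK
            total += self.CHUNK
        # Stage 2: consume space inside the already-allocated chunks.
        self.chunks[kind] = [total, used + gib]

fs = ToyBtrfs(465)         # roughly one device's share of the RAID1 pair
fs.write("data", 239.78)
fs.write("metadata", 5.9)
print(fs.chunks["data"])   # [240, 239.78]: allocated vs used, as in fi df
print(fs.unallocated)      # plenty left for future data or metadata chunks
```

The point the model makes is that "data total" growing to just above "data used" is normal behaviour, not a hard limit.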
So what the above df line is saying is that 240 GiB of space have been
allocated as data chunks, and 239.78 GiB of that, almost all of it, is
used.
But you should still have 200+ GiB of unallocated space on each of
the devices, as shown by the individual device lines of the show
command (465 total, 248 used), tho as I said, btrfs filesystem usage
makes that rather clearer.
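The arithmetic behind the two commands can be checked directly (all figures copied from the output quoted above):

```python
# Figures from `btrfs fi df` (GiB): per-profile chunk allocations.
data_total = 240.00
meta_total = 8.00
sys_total = 8.0 / 1024            # the 8 MiB system chunk, in GiB

# With RAID1 every chunk has a copy on each device, so the per-device
# "used" in `btrfs fi show` is simply the sum of the allocations:
allocated = data_total + meta_total + sys_total
print(round(allocated, 2))        # 248.01, matching devid 1 and devid 2

# What each 465.76 GiB device still has free for new chunks:
print(round(465.76 - allocated, 2))   # ~217.75 GiB unallocated
```

So the "used" in fi show is chunk-allocated space, not file data, and the missing ~217 GiB per device is simply not yet allocated to any chunk.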
And btrfs should normally allocate additional space from that 200+
GiB of unallocated space, to data or metadata chunks, as necessary.
Further, because btrfs can't directly take chunks allocated as data
and reallocate them as metadata, you *WANT* lots of unallocated space.
You do NOT want all that extra space allocated as data chunks, because
then it wouldn't be available to allocate as metadata if needed.
Now with 200+ GiB of space on each of the two devices unallocated, you
shouldn't yet be running into ENOSPC (no space left on device) errors.
If you are, that's a bug, and there have actually been a couple of
bugs like that recently. But that doesn't mean you want btrfs to
unnecessarily allocate all that unallocated space as data chunks,
which is what it would have done if it reported all of it as data.
Rather, you need btrfs to allocate data and metadata chunks as needed,
and any space-related errors you are seeing would be bugs in that
process.
Now that you have a newer btrfs-progs and kernel, and have read my
attempt at an explanation above, try btrfs filesystem usage and see if
things are clearer. If not, maybe Hugo or someone else can do better
answering /that/ question. And of course if you're getting ENOSPC
errors with the newer 4.12 kernel, please report that too. Be aware,
tho, that 4.14 is the latest LTS series, with 4.9 the LTS before that,
and support for 4.12, as a normal non-LTS series, has ended as well,
so you might wish to either upgrade to a current 4.14 LTS or downgrade
to the older 4.9 LTS for best support.
Or of course you could go with a current non-LTS kernel. Normally the
latest two release series on both the normal and LTS tracks are best
supported. With 4.15 out and 4.16 nearing release, that means the
latest 4.15 stable release or 4.14 now (becoming 4.16 and 4.15 once
4.16 is released), or on the LTS track the previously mentioned 4.14
and 4.9 series, tho at over a year old 4.9 is already getting rather
harder to support, and 4.14 is now the preferred choice on the LTS
track.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman