* With Linux 5.5: Filesystem full while still 90 GiB free
@ 2020-01-29 19:33 Martin Steigerwald
2020-01-29 20:04 ` Martin Raiber
0 siblings, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-29 19:33 UTC (permalink / raw)
To: linux-btrfs
Hi!
I thought this would not happen anymore, but see yourself:
% LANG=en df -hT /daten
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 310G 0 100% /daten
I removed some larger files but to no avail.
However, also according to btrfs fi usage it is perfectly good:
% btrfs fi usage -T /daten
Overall:
Device size: 400.00GiB
Device allocated: 311.04GiB
Device unallocated: 88.96GiB
Device missing: 0.00B
Used: 309.50GiB
Free (estimated): 90.16GiB (min: 90.16GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 364.03MiB (used: 0.00B)
Data Metadata System
Id Path single single single Unallocated
-- ---------------------- --------- --------- -------- -----------
1 /dev/mapper/sata-daten 310.00GiB 1.01GiB 32.00MiB 88.96GiB
-- ---------------------- --------- --------- -------- -----------
Total 310.00GiB 1.01GiB 32.00MiB 88.96GiB
Used 308.80GiB 714.67MiB 64.00KiB
Even though I think no balance is required in this state, I started one
for some duration and it was able to relocate some block groups just
fine:
[88593.639061] BTRFS info (device dm-4): relocating block group
352409616384 flags data
[88594.076177] BTRFS info (device dm-4): found 1 extents
[88594.147171] BTRFS info (device dm-4): found 1 extents
[88594.185681] BTRFS info (device dm-4): relocating block group
351335874560 flags data
[88597.089032] BTRFS info (device dm-4): found 559 extents
[88597.170087] BTRFS info (device dm-4): found 550 extents
[88597.206519] BTRFS info (device dm-4): relocating block group
350262132736 flags data
[88601.850606] BTRFS info (device dm-4): found 1908 extents
[88601.988330] BTRFS info (device dm-4): found 1908 extents
I aborted it after some time, since from the values above a balance
is not required, I'd say.
Also btrfs check seems to be happy:
% btrfs check /dev/sata/daten
Opening filesystem to check...
Checking filesystem on /dev/sata/daten
UUID: […]
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 332322398208 bytes used, no error found
total csum bytes: 323723112
total tree bytes: 749453312
total fs tree bytes: 367476736
total extent tree bytes: 34422784
btree space waste bytes: 98074732
file data blocks allocated: 468393074688
referenced 331312160768
I believe I switched this filesystem to space cache v2 some time ago.
However looking at the mount options I am not entirely sure:
/dev/mapper/sata-daten on /daten type btrfs
(rw,noatime,ssd,space_cache,subvolid=257,subvol=/daten)
I just cleared the space_cache:
% mount -o remount,clear_cache,space_cache=v2 /daten
% dmesg | tail -3
[89385.503291] BTRFS info (device dm-4): force clearing of disk cache
[89385.503302] BTRFS info (device dm-4): enabling free space tree
[89385.503305] BTRFS info (device dm-4): using free space tree
Still to no avail:
% LANG=en df -hT /daten
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 310G 0 100% /daten
There are no permanent snapshots on the filesystem. Backup scripts
create a snapshot, but delete it after the backup is complete.
A scrub of the data, which I started just in case, reports no errors.
There is nothing unusual about the filesystem in /var/log/kern.log*.
ThinkPad T520 running Debian Sid with self-compiled Linux kernel 5.5.
BTRFS is on Samsung SSD 860 Pro 1 TB. btrfs-progs v5.4.1.
The filesystem mostly has large files I store there once and then they
stay there. However, recently I copied a bunch of root filesystems of a
Proxmox VE cluster to it via rsync -aAHXS, so there are more smaller
files on it now as well. There really is no frequent write activity on
it. The generation number is just 5857. The filesystem is also quite
new, having been created at 2018-08-11. No compression, no raid, no
fancy features used.
So what is going on here?
Any way to recover the lost space… without redoing the filesystem from
scratch? I can redo it easily enough, but if I can spare the time, I'd
appreciate it.
I am really surprised that this bogus out of space thing can still
happen. Especially on such an underutilized filesystem.
Thanks,
--
Martin
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-29 19:33 With Linux 5.5: Filesystem full while still 90 GiB free Martin Steigerwald
@ 2020-01-29 20:04 ` Martin Raiber
2020-01-29 21:20 ` Martin Steigerwald
0 siblings, 1 reply; 25+ messages in thread
From: Martin Raiber @ 2020-01-29 20:04 UTC (permalink / raw)
To: Martin Steigerwald, linux-btrfs
On 29.01.2020 20:33 Martin Steigerwald wrote:
> Hi!
>
> I thought this would not happen anymore, but see yourself:
>
> % LANG=en df -hT /daten
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 310G 0 100% /daten
>
> I removed some larger files but to no avail.
I have the same issue since 5.4. This patch should fix it:
https://lore.kernel.org/linux-btrfs/f1f1a2ab-ed09-d841-6a93-a44a8fb2312f@gmx.com/T/
You can confirm by writing to the file system: it shouldn't say that it
is out of space (only the df report shows zero).
As far as I know, it is unfortunately not fixed in any released kernel yet.
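A minimal sketch of that check, assuming GNU coreutils df and dd (MNT is a placeholder for the affected mount point, e.g. /daten; a temp dir is used here so the sketch is self-contained):

```shell
# Placeholder for the affected mount point (e.g. /daten); a temp dir here
# so the sketch can run anywhere.
MNT=$(mktemp -d)
# What df claims is available, in bytes...
avail=$(df -B1 --output=avail "$MNT" | tail -n1 | tr -d ' ')
# ...versus whether a write actually succeeds:
if dd if=/dev/zero of="$MNT/testfile" bs=1M count=10 status=none; then
    result=ok
else
    result=enospc
fi
echo "df avail: $avail bytes, write: $result"
rm -rf "$MNT"
```

On a filesystem hit by this bug, df shows 0 available, yet the write should still complete with result=ok.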
>
>
> However, also according to btrfs fi usage it is perfectly good:
>
> % btrfs fi usage -T /daten
> Overall:
> Device size: 400.00GiB
> Device allocated: 311.04GiB
> Device unallocated: 88.96GiB
> Device missing: 0.00B
> Used: 309.50GiB
> Free (estimated): 90.16GiB (min: 90.16GiB)
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 364.03MiB (used: 0.00B)
>
> Data Metadata System
> Id Path single single single Unallocated
> -- ---------------------- --------- --------- -------- -----------
> 1 /dev/mapper/sata-daten 310.00GiB 1.01GiB 32.00MiB 88.96GiB
> -- ---------------------- --------- --------- -------- -----------
> Total 310.00GiB 1.01GiB 32.00MiB 88.96GiB
> Used 308.80GiB 714.67MiB 64.00KiB
>
>
> Even in a state where I think that no balance is required. I started one
> for some duration and it was able to relocate some block groups just
> fine:
>
> [88593.639061] BTRFS info (device dm-4): relocating block group
> 352409616384 flags data
> [88594.076177] BTRFS info (device dm-4): found 1 extents
> [88594.147171] BTRFS info (device dm-4): found 1 extents
> [88594.185681] BTRFS info (device dm-4): relocating block group
> 351335874560 flags data
> [88597.089032] BTRFS info (device dm-4): found 559 extents
> [88597.170087] BTRFS info (device dm-4): found 550 extents
> [88597.206519] BTRFS info (device dm-4): relocating block group
> 350262132736 flags data
> [88601.850606] BTRFS info (device dm-4): found 1908 extents
> [88601.988330] BTRFS info (device dm-4): found 1908 extents
>
> I aborted it after some time ago, cause from the values above a balance
> is not required, I'd say.
>
>
> Also btrfs check seems to be happy:
>
> % btrfs check /dev/sata/daten
> Opening filesystem to check...
> Checking filesystem on /dev/sata/daten
> UUID: […]
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 332322398208 bytes used, no error found
> total csum bytes: 323723112
> total tree bytes: 749453312
> total fs tree bytes: 367476736
> total extent tree bytes: 34422784
> btree space waste bytes: 98074732
> file data blocks allocated: 468393074688
> referenced 331312160768
>
>
> I believe I switched this filesystem to space cache v2 some time ago.
> However looking at the mount options I am not entirely sure:
>
> /dev/mapper/sata-daten on /daten type btrfs
> (rw,noatime,ssd,space_cache,subvolid=257,subvol=/daten)
>
> I just cleared the space_cache:
>
> % mount -o remount,clear_cache,space_cache=v2 /daten
>
> % dmesg | tail -3
> [89385.503291] BTRFS info (device dm-4): force clearing of disk cache
> [89385.503302] BTRFS info (device dm-4): enabling free space tree
> [89385.503305] BTRFS info (device dm-4): using free space tree
>
> Still to no avail:
>
> % LANG=en df -hT /daten
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 310G 0 100% /daten
>
>
> There are no permanent snapshots on the filesystem. Backup scripts
> create a snapshot, but delete it after the backup is completely.
>
> A scrub of the data, which I started just in case, reports no errors.
>
> There is nothing unusual about the filesystem in /var/log/kern.log*
>
>
> ThinkPad T520 running Debian Sid with self-compiled Linux kernel 5.5.
> BTRFS is on Samsung SSD 860 Pro 1 TB. btrfs-progs v5.4.1.
>
>
> The filesystem mostly has large files I store there once and then they
> stay there. However recently I copied a bunch of root filesystems of a
> Proxmox VE cluster to them via rsync -aAHXS, so there are more smaller
> files on it now as well. There really is no frequent write activity on
> it. The generation number is just 5857. The filesystem is also quite
> new, having been created at 2018-08-11. No compression, no raid, no
> fancy features used.
>
>
> So what is going on here?
>
> Any way to recover the lost space… without redoing the filesystem from
> scratch? I can redo it easily enough, but if I can spare the time,I'd
> appreciate it.
>
>
> I am really surprised that this bogus out of space thing can still
> happen. Especially on such an underutilized filesystem.
>
> Thanks,
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-29 20:04 ` Martin Raiber
@ 2020-01-29 21:20 ` Martin Steigerwald
2020-01-29 22:55 ` Chris Murphy
0 siblings, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-29 21:20 UTC (permalink / raw)
To: Martin Raiber; +Cc: linux-btrfs
Martin Raiber - 29.01.20, 21:04:41 CET:
> On 29.01.2020 20:33 Martin Steigerwald wrote:
> > I thought this would not happen anymore, but see yourself:
> >
> > % LANG=en df -hT /daten
> > Filesystem Type Size Used Avail Use% Mounted on
> > /dev/mapper/sata-daten btrfs 400G 310G 0 100% /daten
> >
> > I removed some larger files but to no avail.
>
> I have the same issue since 5.4. This patch should fix it:
> https://lore.kernel.org/linux-btrfs/f1f1a2ab-ed09-d841-6a93-a44a8fb2312f@gmx.com/T/
> Confirm by writing to the file system. It shouldn't say
> that it is out of space (only df report says zero).
>
> As far as I know, it is unfortunately not fixed in any released kernel
> yet.
Indeed, the remaining metadata space in the single 1 GiB metadata chunk
is less than the global reserve:
> > However, also according to btrfs fi usage it is perfectly good:
> >
> > % btrfs fi usage -T /daten
> >
> > Overall:
> > Device size: 400.00GiB
> > Device allocated: 311.04GiB
> > Device unallocated: 88.96GiB
> > Device missing: 0.00B
> > Used: 309.50GiB
> > Free (estimated): 90.16GiB (min: 90.16GiB)
> > Data ratio: 1.00
> > Metadata ratio: 1.00
> > Global reserve: 364.03MiB (used: 0.00B)
> >
> > Data Metadata System
> > Id Path single single single Unallocated
> > -- ---------------------- --------- --------- -------- -----------
> > 1 /dev/mapper/sata-daten 310.00GiB 1.01GiB 32.00MiB 88.96GiB
> > -- ---------------------- --------- --------- -------- -----------
> > Total 310.00GiB 1.01GiB 32.00MiB 88.96GiB
> > Used 308.80GiB 714.67MiB 64.00KiB
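The arithmetic behind that, as a quick sketch using the figures from the quoted usage table (1.01 GiB metadata allocated, 714.67 MiB used) and the 364.03 MiB global reserve from the Overall section:

```shell
# Metadata chunk: 1.01 GiB allocated, 714.67 MiB used (values from the
# usage table above). 1.01 GiB = 1034.24 MiB.
meta_free=$(awk 'BEGIN { printf "%.2f", 1.01 * 1024 - 714.67 }')
reserve=364.03
echo "metadata free: ${meta_free} MiB, global reserve: ${reserve} MiB"
# statfs reports 0 available once free metadata falls below the reserve:
if [ "$(awk -v f="$meta_free" -v r="$reserve" 'BEGIN { print (f < r) }')" -eq 1 ]; then
    verdict="free < reserve"
else
    verdict="free >= reserve"
fi
echo "$verdict"
```

Roughly 319.57 MiB of free metadata against a 364.03 MiB reserve, so the comparison comes out "free < reserve".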
Hmmm, Dolphin file manager said out of space, but that may be because it
meanwhile checks for enough available space *before* initiating the copy
or move operation.
Consequently, using "mv" worked to move the files to that filesystem.
Thank you for that hint.
So if it's just a cosmetic issue then I can wait for the patch to land in
linux-stable. Or does it still need testing?
Best,
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-29 21:20 ` Martin Steigerwald
@ 2020-01-29 22:55 ` Chris Murphy
2020-01-30 10:41 ` Martin Steigerwald
2020-01-30 17:19 ` David Sterba
0 siblings, 2 replies; 25+ messages in thread
From: Chris Murphy @ 2020-01-29 22:55 UTC (permalink / raw)
To: Martin Steigerwald; +Cc: Martin Raiber, Btrfs BTRFS
On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
>
> So if its just a cosmetic issue then I can wait for the patch to land in
> linux-stable. Or does it still need testing?
I'm not seeing it in linux-next. A reasonable short term work around
is mount option 'metadata_ratio=1' and that's what needs more testing,
because it seems decently likely mortal users will need an easy work
around until a fix gets backported to stable. And that's gonna be a
while, me thinks.
Is that mount option sufficient? Or does it take a filtered balance?
What's the most minimal balance needed? I'm hoping -dlimit=1
I can't figure out a way to trigger this though, otherwise I'd be
doing more testing.
--
Chris Murphy
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-29 22:55 ` Chris Murphy
@ 2020-01-30 10:41 ` Martin Steigerwald
2020-01-30 16:37 ` Chris Murphy
2020-01-30 17:19 ` David Sterba
1 sibling, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 10:41 UTC (permalink / raw)
To: Chris Murphy; +Cc: Martin Raiber, Btrfs BTRFS
Chris Murphy - 29.01.20, 23:55:06 CET:
> On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
<martin@lichtvoll.de> wrote:
> > So if its just a cosmetic issue then I can wait for the patch to
> > land in linux-stable. Or does it still need testing?
>
> I'm not seeing it in linux-next. A reasonable short term work around
> is mount option 'metadata_ratio=1' and that's what needs more testing,
> because it seems decently likely mortal users will need an easy work
> around until a fix gets backported to stable. And that's gonna be a
> while, me thinks.
>
> Is that mount option sufficient? Or does it take a filtered balance?
> What's the most minimal balance needed? I'm hoping -dlimit=1
Does not make a difference. I did:
- mount -o remount,metadata_ratio=1 /daten
- touch /daten/somefile
- dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
- sync
- df still reporting zero space free
> I can't figure out a way to trigger this though, otherwise I'd be
> doing more testing.
Sure.
I am doing the balance -dlimit=1 thing next. With metadata_ratio=0
again.
% btrfs balance start -dlimit=1 /daten
Done, had to relocate 1 out of 312 chunks
% LANG=en df -hT /daten
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
Okay, doing with metadata_ratio=1:
% mount -o remount,metadata_ratio=1 /daten
% btrfs balance start -dlimit=1 /daten
Done, had to relocate 1 out of 312 chunks
% LANG=en df -hT /daten
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
Okay, other suggestions? I'd like to avoid shuffling 311 GiB data around
using a full balance.
Thanks,
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 10:41 ` Martin Steigerwald
@ 2020-01-30 16:37 ` Chris Murphy
2020-01-30 20:02 ` Martin Steigerwald
0 siblings, 1 reply; 25+ messages in thread
From: Chris Murphy @ 2020-01-30 16:37 UTC (permalink / raw)
To: Martin Steigerwald; +Cc: Chris Murphy, Martin Raiber, Btrfs BTRFS
On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald <martin@lichtvoll.de> wrote:
>
> Chris Murphy - 29.01.20, 23:55:06 CET:
> > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
> <martin@lichtvoll.de> wrote:
> > > So if its just a cosmetic issue then I can wait for the patch to
> > > land in linux-stable. Or does it still need testing?
> >
> > I'm not seeing it in linux-next. A reasonable short term work around
> > is mount option 'metadata_ratio=1' and that's what needs more testing,
> > because it seems decently likely mortal users will need an easy work
> > around until a fix gets backported to stable. And that's gonna be a
> > while, me thinks.
> >
> > Is that mount option sufficient? Or does it take a filtered balance?
> > What's the most minimal balance needed? I'm hoping -dlimit=1
>
> Does not make a difference. I did:
>
> - mount -o remount,metadata_ratio=1 /daten
> - touch /daten/somefile
> - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
> - sync
> - df still reporting zero space free
>
> > I can't figure out a way to trigger this though, otherwise I'd be
> > doing more testing.
>
> Sure.
>
> I am doing the balance -dlimit=1 thing next. With metadata_ratio=0
> again.
>
> % btrfs balance start -dlimit=1 /daten
> Done, had to relocate 1 out of 312 chunks
>
> % LANG=en df -hT /daten
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>
> Okay, doing with metadata_ratio=1:
>
> % mount -o remount,metadata_ratio=1 /daten
>
> % btrfs balance start -dlimit=1 /daten
> Done, had to relocate 1 out of 312 chunks
>
> % LANG=en df -hT /daten
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>
>
> Okay, other suggestions? I'd like to avoid shuffling 311 GiB data around
> using a full balance.
There's earlier anecdotal evidence that -dlimit=10 will work. But you
can just keep using -dlimit=1 and it'll balance a different block
group each time (you can confirm/deny this with the block group
address and extent count in dmesg for each balance). Count how many it
takes to get df to stop misreporting. It may be a file system specific
value.
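That counting loop could be sketched like this (an assumption, not a verified fix; MNT stands in for the affected mount — a temp dir here, where df already reports space, so the loop exits immediately without ever calling btrfs):

```shell
# Repeatedly balance one data block group until df stops reporting
# 0 available. Replace MNT with the affected mount, e.g. /daten.
MNT=$(mktemp -d)
i=0
while :; do
    avail=$(df -B1 --output=avail "$MNT" | tail -n1 | tr -d ' ')
    [ "$avail" -gt 0 ] && break      # df no longer reports 0 available
    i=$((i + 1))
    echo "balance pass $i"
    btrfs balance start -dlimit=1 "$MNT"
done
echo "df recovered after $i balance passes (avail: $avail bytes)"
rmdir "$MNT"
```

The dmesg block-group address after each pass tells you whether each iteration really hit a different block group.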
--
Chris Murphy
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-29 22:55 ` Chris Murphy
2020-01-30 10:41 ` Martin Steigerwald
@ 2020-01-30 17:19 ` David Sterba
2020-01-30 19:31 ` Chris Murphy
1 sibling, 1 reply; 25+ messages in thread
From: David Sterba @ 2020-01-30 17:19 UTC (permalink / raw)
To: Chris Murphy; +Cc: Martin Steigerwald, Martin Raiber, Btrfs BTRFS
On Wed, Jan 29, 2020 at 03:55:06PM -0700, Chris Murphy wrote:
> On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
> >
> > So if its just a cosmetic issue then I can wait for the patch to land in
> > linux-stable. Or does it still need testing?
>
> I'm not seeing it in linux-next. A reasonable short term work around
> is mount option 'metadata_ratio=1' and that's what needs more testing,
> because it seems decently likely mortal users will need an easy work
> around until a fix gets backported to stable. And that's gonna be a
> while, me thinks.
We're looking into some fix that could be backported, as it affects a
long-term kernel (5.4).
The fix
https://lore.kernel.org/linux-btrfs/20200115034128.32889-1-wqu@suse.com/
IMHO works by accident and is not good even as a workaround, only papers
over the problem in some cases. The size of metadata over-reservation
(caused by a change in the logic that estimates the 'over-' part) adds
up to the global block reserve (which is permanent and serves as a
last-resort reserve for deletions).
In other words "we're making this larger by number A, so let's subtract
some number B". The fix is to use A.
> Is that mount option sufficient? Or does it take a filtered balance?
> What's the most minimal balance needed? I'm hoping -dlimit=1
>
> I can't figure out a way to trigger this though, otherwise I'd be
> doing more testing.
I haven't checked but I think the suggested workarounds affect statfs as
a side effect. Also as the reservations are temporary, the numbers
change again after a sync.
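So any before/after comparison should bracket a sync; a hedged sketch (a temp dir stands in for the real mount, where the effect would actually show up):

```shell
# Compare statfs-backed df output around a sync; on an affected btrfs
# mount the transient metadata reservations shift these numbers.
MNT=$(mktemp -d)
before=$(df -B1 --output=avail "$MNT" | tail -n1 | tr -d ' ')
sync            # flush dirty data; temporary reservations are released
after=$(df -B1 --output=avail "$MNT" | tail -n1 | tr -d ' ')
echo "avail before: $before bytes, after: $after bytes"
rmdir "$MNT"
```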
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 17:19 ` David Sterba
@ 2020-01-30 19:31 ` Chris Murphy
2020-01-30 19:58 ` Martin Steigerwald
2020-01-31 3:00 ` Zygo Blaxell
0 siblings, 2 replies; 25+ messages in thread
From: Chris Murphy @ 2020-01-30 19:31 UTC (permalink / raw)
To: David Sterba, Chris Murphy, Martin Steigerwald, Martin Raiber,
Btrfs BTRFS
On Thu, Jan 30, 2020 at 10:20 AM David Sterba <dsterba@suse.cz> wrote:
>
> On Wed, Jan 29, 2020 at 03:55:06PM -0700, Chris Murphy wrote:
> > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
> > >
> > > So if its just a cosmetic issue then I can wait for the patch to land in
> > > linux-stable. Or does it still need testing?
> >
> > I'm not seeing it in linux-next. A reasonable short term work around
> > is mount option 'metadata_ratio=1' and that's what needs more testing,
> > because it seems decently likely mortal users will need an easy work
> > around until a fix gets backported to stable. And that's gonna be a
> > while, me thinks.
>
> We're looking into some fix that could be backported, as it affects a
> long-term kernel (5.4).
>
> The fix
> https://lore.kernel.org/linux-btrfs/20200115034128.32889-1-wqu@suse.com/
> IMHO works by accident and is not good even as a workaround, only papers
> over the problem in some cases. The size of metadata over-reservation
> (caused by a change in the logic that estimates the 'over-' part) adds
> up to the global block reserve (that's permanent and as last resort
> reserve for deletion).
>
> In other words "we're making this larger by number A, so let's subtract
> some number B". The fix is to use A.
>
> > Is that mount option sufficient? Or does it take a filtered balance?
> > What's the most minimal balance needed? I'm hoping -dlimit=1
> >
> > I can't figure out a way to trigger this though, otherwise I'd be
> > doing more testing.
>
> I haven't checked but I think the suggested workarounds affect statfs as
> a side effect. Also as the reservations are temporary, the numbers
> change again after a sync.
Yeah I'm being careful to qualify to mortal users that any workarounds
are temporary and uncertain. I'm not even certain what the pattern is,
people with new file systems have hit it. A full balance seems to fix
it, and then soon after the problem happens again. I don't do any
balancing these days, for over a year now, so I wonder if that's why
I'm not seeing it.
But yeah, a small number of people are hitting it, and it also stops
any program that does a free space check (presumably using statfs).
A more reliable/universal work around in the meantime is still useful;
in particular if it doesn't require changing mount options, or only
requires it temporarily (e.g. not added to /etc/fstab, where it can
be forgotten for the life of that system).
--
Chris Murphy
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 19:31 ` Chris Murphy
@ 2020-01-30 19:58 ` Martin Steigerwald
2020-01-31 3:00 ` Zygo Blaxell
1 sibling, 0 replies; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 19:58 UTC (permalink / raw)
To: Chris Murphy; +Cc: David Sterba, Martin Raiber, linux-btrfs
Chris Murphy - 30.01.20, 20:31:40 CET:
> > > Is that mount option sufficient? Or does it take a filtered
> > > balance?
> > > What's the most minimal balance needed? I'm hoping -dlimit=1
> > >
> > > I can't figure out a way to trigger this though, otherwise I'd be
> > > doing more testing.
> >
> > I haven't checked but I think the suggested workarounds affect
> > statfs as a side effect. Also as the reservations are temporary,
> > the numbers change again after a sync.
>
> Yeah I'm being careful to qualify to mortal users that any workarounds
> are temporary and uncertain. I'm not even certain what the pattern
> is, people with new file systems have hit it. A full balance seems to
> fix it, and then soon after the problem happens again. I don't do any
> balancing these days, for over a year now, so I wonder if that's why
> I'm not seeing it.
>
> But yeah a small number of people are hitting it, but it also stops
> any program that does a free space check (presumably using statfs).
>
> A more reliable/universal work around in the meantime is still useful;
> in particular if it doesn't require changing mount options, or only
> requires it temporarily (e.g. not added to /etc/fstab, where it can
> be forgotten for the life of that system).
I did not balance either, except maybe for a very short time during
trainings, in order to show people how it works.
I never bought into balancing regularly.
Thanks,
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 16:37 ` Chris Murphy
@ 2020-01-30 20:02 ` Martin Steigerwald
2020-01-30 20:18 ` Chris Murphy
0 siblings, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 20:02 UTC (permalink / raw)
To: Chris Murphy; +Cc: Martin Raiber, Btrfs BTRFS
Chris Murphy - 30.01.20, 17:37:42 CET:
> On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald
<martin@lichtvoll.de> wrote:
> > Chris Murphy - 29.01.20, 23:55:06 CET:
> > > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
> >
> > <martin@lichtvoll.de> wrote:
> > > > So if its just a cosmetic issue then I can wait for the patch to
> > > > land in linux-stable. Or does it still need testing?
> > >
> > > I'm not seeing it in linux-next. A reasonable short term work
> > > around
> > > is mount option 'metadata_ratio=1' and that's what needs more
> > > testing, because it seems decently likely mortal users will need
> > > an easy work around until a fix gets backported to stable. And
> > > that's gonna be a while, me thinks.
> > >
> > > Is that mount option sufficient? Or does it take a filtered
> > > balance?
> > > What's the most minimal balance needed? I'm hoping -dlimit=1
> >
> > Does not make a difference. I did:
> >
> > - mount -o remount,metadata_ratio=1 /daten
> > - touch /daten/somefile
> > - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
> > - sync
> > - df still reporting zero space free
> >
> > > I can't figure out a way to trigger this though, otherwise I'd be
> > > doing more testing.
> >
> > Sure.
> >
> > I am doing the balance -dlimit=1 thing next. With metadata_ratio=0
> > again.
> >
> > % btrfs balance start -dlimit=1 /daten
> > Done, had to relocate 1 out of 312 chunks
> >
> > % LANG=en df -hT /daten
> > Filesystem Type Size Used Avail Use% Mounted on
> > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >
> > Okay, doing with metadata_ratio=1:
> >
> > % mount -o remount,metadata_ratio=1 /daten
> >
> > % btrfs balance start -dlimit=1 /daten
> > Done, had to relocate 1 out of 312 chunks
> >
> > % LANG=en df -hT /daten
> > Filesystem Type Size Used Avail Use% Mounted on
> > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >
> >
> > Okay, other suggestions? I'd like to avoid shuffling 311 GiB data
> > around using a full balance.
>
> There's earlier anecdotal evidence that -dlimit=10 will work. But you
> can just keep using -dlimit=1 and it'll balance a different block
> group each time (you can confirm/deny this with the block group
> address and extent count in dmesg for each balance). Count how many it
> takes to get df to stop misreporting. It may be a file system
> specific value.
Lost patience after 25 attempts:
date; let I=I+1; echo "Balance $I"; btrfs balance start -dlimit=1 /daten; LANG=en df -hT /daten
Do 30. Jan 20:59:17 CET 2020
Balance 25
Done, had to relocate 1 out of 312 chunks
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
Doing the -dlimit=10 balance now:
% btrfs balance start -dlimit=10 /daten ; LANG=en df -hT /daten
Done, had to relocate 10 out of 312 chunks
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
Okay, enough of balancing for today.
I bet I just wait for a proper fix, instead of needlessly shuffling data
around.
Thanks for the suggestions, though.
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 20:02 ` Martin Steigerwald
@ 2020-01-30 20:18 ` Chris Murphy
2020-01-30 20:59 ` Josef Bacik
2020-01-30 21:10 ` Martin Steigerwald
0 siblings, 2 replies; 25+ messages in thread
From: Chris Murphy @ 2020-01-30 20:18 UTC (permalink / raw)
To: Martin Steigerwald; +Cc: Chris Murphy, Martin Raiber, Btrfs BTRFS
On Thu, Jan 30, 2020 at 1:02 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
>
> Chris Murphy - 30.01.20, 17:37:42 CET:
> > On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald
> <martin@lichtvoll.de> wrote:
> > > Chris Murphy - 29.01.20, 23:55:06 CET:
> > > > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
> > >
> > > <martin@lichtvoll.de> wrote:
> > > > > So if its just a cosmetic issue then I can wait for the patch to
> > > > > land in linux-stable. Or does it still need testing?
> > > >
> > > > I'm not seeing it in linux-next. A reasonable short term work
> > > > around
> > > > is mount option 'metadata_ratio=1' and that's what needs more
> > > > testing, because it seems decently likely mortal users will need
> > > > an easy work around until a fix gets backported to stable. And
> > > > that's gonna be a while, me thinks.
> > > >
> > > > Is that mount option sufficient? Or does it take a filtered
> > > > balance?
> > > > What's the most minimal balance needed? I'm hoping -dlimit=1
> > >
> > > Does not make a difference. I did:
> > >
> > > - mount -o remount,metadata_ratio=1 /daten
> > > - touch /daten/somefile
> > > - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
> > > - sync
> > > - df still reporting zero space free
> > >
> > > > I can't figure out a way to trigger this though, otherwise I'd be
> > > > doing more testing.
> > >
> > > Sure.
> > >
> > > I am doing the balance -dlimit=1 thing next. With metadata_ratio=0
> > > again.
> > >
> > > % btrfs balance start -dlimit=1 /daten
> > > Done, had to relocate 1 out of 312 chunks
> > >
> > > % LANG=en df -hT /daten
> > > Filesystem Type Size Used Avail Use% Mounted on
> > > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> > >
> > > Okay, doing with metadata_ratio=1:
> > >
> > > % mount -o remount,metadata_ratio=1 /daten
> > >
> > > % btrfs balance start -dlimit=1 /daten
> > > Done, had to relocate 1 out of 312 chunks
> > >
> > > % LANG=en df -hT /daten
> > > Filesystem Type Size Used Avail Use% Mounted on
> > > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> > >
> > >
> > > Okay, other suggestions? I'd like to avoid shuffling 311 GiB data
> > > around using a full balance.
> >
> > There's earlier anecdotal evidence that -dlimit=10 will work. But you
> > can just keep using -dlimit=1 and it'll balance a different block
> > group each time (you can confirm/deny this with the block group
> > address and extent count in dmesg for each balance). Count how many it
> > takes to get df to stop misreporting. It may be a file system
> > specific value.
>
> Lost the patience after 25 attempts:
>
> date; let I=I+1; echo "Balance $I"; btrfs balance start -dlimit=1 /daten
> ; LANG=en df -hT /daten
> Do 30. Jan 20:59:17 CET 2020
> Balance 25
> Done, had to relocate 1 out of 312 chunks
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>
>
> Doing the -dlimit=10 balance now:
>
> % btrfs balance start -dlimit=10 /daten ; LANG=en df -hT /daten
> Done, had to relocate 10 out of 312 chunks
> Filesystem Type Size Used Avail Use% Mounted on
> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>
> Okay, enough of balancing for today.
>
> I bet I just wait for a proper fix, instead of needlessly shuffling data
> around.
What about unmounting and remounting?
There is a proposed patch that David referenced in this thread, but
it's looking like it papers over the real problem. But even if so,
that'd get your file system working sooner than a proper fix, which I
think (?) needs to be demonstrated to at least cause no new
regressions in 5.6, before it'll be backported to stable.
--
Chris Murphy
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 20:18 ` Chris Murphy
@ 2020-01-30 20:59 ` Josef Bacik
2020-01-30 21:09 ` Chris Murphy
2020-01-30 21:12 ` Martin Steigerwald
2020-01-30 21:10 ` Martin Steigerwald
1 sibling, 2 replies; 25+ messages in thread
From: Josef Bacik @ 2020-01-30 20:59 UTC (permalink / raw)
To: Chris Murphy, Martin Steigerwald; +Cc: Martin Raiber, Btrfs BTRFS
On 1/30/20 3:18 PM, Chris Murphy wrote:
> On Thu, Jan 30, 2020 at 1:02 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
>>
>> Chris Murphy - 30.01.20, 17:37:42 CET:
>>> On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald
>> <martin@lichtvoll.de> wrote:
>>>> Chris Murphy - 29.01.20, 23:55:06 CET:
>>>>> On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
>>>>
>>>> <martin@lichtvoll.de> wrote:
>>>>>> So if it's just a cosmetic issue then I can wait for the patch to
>>>>>> land in linux-stable. Or does it still need testing?
>>>>>
>>>>> I'm not seeing it in linux-next. A reasonable short term work
>>>>> around
>>>>> is mount option 'metadata_ratio=1' and that's what needs more
>>>>> testing, because it seems decently likely mortal users will need
>>>>> an easy work around until a fix gets backported to stable. And
>>>>> that's gonna be a while, me thinks.
>>>>>
>>>>> Is that mount option sufficient? Or does it take a filtered
>>>>> balance?
>>>>> What's the most minimal balance needed? I'm hoping -dlimit=1
>>>>
>>>> Does not make a difference. I did:
>>>>
>>>> - mount -o remount,metadata_ratio=1 /daten
>>>> - touch /daten/somefile
>>>> - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
>>>> - sync
>>>> - df still reporting zero space free
>>>>
>>>>> I can't figure out a way to trigger this though, otherwise I'd be
>>>>> doing more testing.
>>>>
>>>> Sure.
>>>>
>>>> I am doing the balance -dlimit=1 thing next. With metadata_ratio=0
>>>> again.
>>>>
>>>> % btrfs balance start -dlimit=1 /daten
>>>> Done, had to relocate 1 out of 312 chunks
>>>>
>>>> % LANG=en df -hT /daten
>>>> Filesystem Type Size Used Avail Use% Mounted on
>>>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>>>>
>>>> Okay, doing with metadata_ratio=1:
>>>>
>>>> % mount -o remount,metadata_ratio=1 /daten
>>>>
>>>> % btrfs balance start -dlimit=1 /daten
>>>> Done, had to relocate 1 out of 312 chunks
>>>>
>>>> % LANG=en df -hT /daten
>>>> Filesystem Type Size Used Avail Use% Mounted on
>>>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>>>>
>>>>
>>>> Okay, other suggestions? I'd like to avoid shuffling 311 GiB data
>>>> around using a full balance.
>>>
>>> There's earlier anecdotal evidence that -dlimit=10 will work. But you
>>> can just keep using -dlimit=1 and it'll balance a different block
>>> group each time (you can confirm/deny this with the block group
>>> address and extent count in dmesg for each balance). Count how many it
>>> takes to get df to stop misreporting. It may be a file system
>>> specific value.
>>
>> Lost patience after 25 attempts:
>>
>> date; let I=I+1; echo "Balance $I"; btrfs balance start -dlimit=1 /daten
>> ; LANG=en df -hT /daten
>> Do 30. Jan 20:59:17 CET 2020
>> Balance 25
>> Done, had to relocate 1 out of 312 chunks
>> Filesystem Type Size Used Avail Use% Mounted on
>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>>
>>
>> Doing the -dlimit=10 balance now:
>>
>> % btrfs balance start -dlimit=10 /daten ; LANG=en df -hT /daten
>> Done, had to relocate 10 out of 312 chunks
>> Filesystem Type Size Used Avail Use% Mounted on
>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
>>
>> Okay, enough of balancing for today.
>>
>> I bet I just wait for a proper fix, instead of needlessly shuffling data
>> around.
>
> What about unmounting and remounting?
>
> There is a proposed patch that David referenced in this thread, but
> it's looking like it papers over the real problem. But even if so,
> that'd get your file system working sooner than a proper fix, which I
> think (?) needs to be demonstrated to at least cause no new
> regressions in 5.6, before it'll be backported to stable.
>
>
The file system is fine; you don't need to balance or anything, this is purely a
cosmetic bug. _Always_ trust what btrfs filesystem usage tells you, and it's
telling you that there's 88 GiB of unallocated space. df is just wrong because 5
years ago we arbitrarily decided to set b_avail to 0 if we didn't have enough
metadata space for the whole global reserve, no matter how much unallocated space
we had left. A recent change means that we are more likely to not have enough
free metadata space for the whole global reserve even when there's unallocated space,
specifically because we can use that unallocated space if we absolutely have to.
The fix will be to adjust the statfs() madness, and then df will tell you the
right thing (well, as right as it can ever tell you anyway). Thanks,
Josef
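The clamp Josef describes can be sketched as a simplified model (an illustration of the reported behaviour only, not the actual kernel code; the sizes below roughly approximate Martin's filesystem and are assumptions):

```shell
# Simplified model of the statfs() behaviour described above: if free
# metadata space is smaller than the global reserve, report 0 available,
# regardless of how much unallocated space remains.
report_avail() {
    free_data_kib=$1; free_meta_kib=$2; global_reserve_kib=$3
    if [ "$free_meta_kib" -lt "$global_reserve_kib" ]; then
        echo 0          # what df shows on the affected kernels
    else
        echo "$free_data_kib"
    fi
}

# Roughly Martin's case: ~90 GiB free data, ~320 MiB free metadata,
# ~364 MiB global reserve -> free metadata < reserve -> df shows 0.
report_avail 94371840 327680 372736
```

Once free metadata rises above the reserve (for example after a new metadata block group is allocated), the same call reports the real data figure again, which matches the on/off behaviour people see.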
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 20:59 ` Josef Bacik
@ 2020-01-30 21:09 ` Chris Murphy
2020-01-30 21:32 ` Martin Raiber
2020-01-30 21:12 ` Martin Steigerwald
1 sibling, 1 reply; 25+ messages in thread
From: Chris Murphy @ 2020-01-30 21:09 UTC (permalink / raw)
To: Josef Bacik; +Cc: Chris Murphy, Martin Steigerwald, Martin Raiber, Btrfs BTRFS
On Thu, Jan 30, 2020 at 1:59 PM Josef Bacik <josef@toxicpanda.com> wrote:
>
> The file system is fine, you don't need to balance or anything, this is purely a
> cosmetic bug.
It's not entirely cosmetic if a program uses statfs to check free
space and then errors when it finds none. Some people are running into
that.
https://lore.kernel.org/linux-btrfs/alpine.DEB.2.21.99999.375.2001131514010.21037@trent.utfs.org/
I guess right now the most reliable work around is to revert to 5.3.18.
--
Chris Murphy
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 20:18 ` Chris Murphy
2020-01-30 20:59 ` Josef Bacik
@ 2020-01-30 21:10 ` Martin Steigerwald
2020-01-30 21:20 ` Remi Gauvin
1 sibling, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 21:10 UTC (permalink / raw)
To: Chris Murphy; +Cc: Martin Raiber, Btrfs BTRFS
Chris Murphy - 30.01.20, 21:18:41 CET:
> On Thu, Jan 30, 2020 at 1:02 PM Martin Steigerwald
<martin@lichtvoll.de> wrote:
> > Chris Murphy - 30.01.20, 17:37:42 CET:
> > > On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald
> >
> > <martin@lichtvoll.de> wrote:
> > > > Chris Murphy - 29.01.20, 23:55:06 CET:
> > > > > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
> > > >
> > > > <martin@lichtvoll.de> wrote:
> > > > > > So if it's just a cosmetic issue then I can wait for the
> > > > > > patch to
> > > > > > land in linux-stable. Or does it still need testing?
> > > > >
> > > > > I'm not seeing it in linux-next. A reasonable short term work
> > > > > around
> > > > > is mount option 'metadata_ratio=1' and that's what needs more
> > > > > testing, because it seems decently likely mortal users will
> > > > > need
> > > > > an easy work around until a fix gets backported to stable. And
> > > > > that's gonna be a while, me thinks.
> > > > >
> > > > > Is that mount option sufficient? Or does it take a filtered
> > > > > balance?
> > > > > What's the most minimal balance needed? I'm hoping -dlimit=1
> > > >
> > > > Does not make a difference. I did:
> > > >
> > > > - mount -o remount,metadata_ratio=1 /daten
> > > > - touch /daten/somefile
> > > > - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
> > > > - sync
> > > > - df still reporting zero space free
> > > >
> > > > > I can't figure out a way to trigger this though, otherwise I'd
> > > > > be
> > > > > doing more testing.
> > > >
> > > > Sure.
> > > >
> > > > I am doing the balance -dlimit=1 thing next. With
> > > > metadata_ratio=0
> > > > again.
> > > >
> > > > % btrfs balance start -dlimit=1 /daten
> > > > Done, had to relocate 1 out of 312 chunks
> > > >
> > > > % LANG=en df -hT /daten
> > > > Filesystem Type Size Used Avail Use% Mounted on
> > > > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> > > >
> > > > Okay, doing with metadata_ratio=1:
> > > >
> > > > % mount -o remount,metadata_ratio=1 /daten
> > > >
> > > > % btrfs balance start -dlimit=1 /daten
> > > > Done, had to relocate 1 out of 312 chunks
> > > >
> > > > % LANG=en df -hT /daten
> > > > Filesystem Type Size Used Avail Use% Mounted on
> > > > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> > > >
> > > >
> > > > Okay, other suggestions? I'd like to avoid shuffling 311 GiB
> > > > data
> > > > around using a full balance.
> > >
> > > There's earlier anecdotal evidence that -dlimit=10 will work. But
> > > you
> > > can just keep using -dlimit=1 and it'll balance a different block
> > > group each time (you can confirm/deny this with the block group
> > > address and extent count in dmesg for each balance). Count how
> > > many it takes to get df to stop misreporting. It may be a file
> > > system specific value.
> >
> > Lost patience after 25 attempts:
> >
> > date; let I=I+1; echo "Balance $I"; btrfs balance start -dlimit=1
> > /daten ; LANG=en df -hT /daten
> > Do 30. Jan 20:59:17 CET 2020
> > Balance 25
> > Done, had to relocate 1 out of 312 chunks
> > Filesystem Type Size Used Avail Use% Mounted on
> > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >
> >
> > Doing the -dlimit=10 balance now:
> >
> > % btrfs balance start -dlimit=10 /daten ; LANG=en df -hT /daten
> > Done, had to relocate 10 out of 312 chunks
> > Filesystem Type Size Used Avail Use% Mounted on
> > /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >
> > Okay, enough of balancing for today.
> >
> > I bet I just wait for a proper fix, instead of needlessly shuffling
> > data around.
>
> What about unmounting and remounting?
Does not help.
> There is a proposed patch that David referenced in this thread, but
> it's looking like it papers over the real problem. But even if so,
> that'd get your file system working sooner than a proper fix, which I
> think (?) needs to be demonstrated to at least cause no new
> regressions in 5.6, before it'll be backported to stable.
I am done with re-balancing experiments.
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 20:59 ` Josef Bacik
2020-01-30 21:09 ` Chris Murphy
@ 2020-01-30 21:12 ` Martin Steigerwald
1 sibling, 0 replies; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 21:12 UTC (permalink / raw)
To: Josef Bacik; +Cc: Chris Murphy, Martin Raiber, Btrfs BTRFS
Josef Bacik - 30.01.20, 21:59:31 CET:
> On 1/30/20 3:18 PM, Chris Murphy wrote:
> > On Thu, Jan 30, 2020 at 1:02 PM Martin Steigerwald
<martin@lichtvoll.de> wrote:
> >> Chris Murphy - 30.01.20, 17:37:42 CET:
> >>> On Thu, Jan 30, 2020 at 3:41 AM Martin Steigerwald
> >>
> >> <martin@lichtvoll.de> wrote:
> >>>> Chris Murphy - 29.01.20, 23:55:06 CET:
> >>>>> On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald
> >>>>
> >>>> <martin@lichtvoll.de> wrote:
> >>>>>> So if it's just a cosmetic issue then I can wait for the patch
> >>>>>> to
> >>>>>> land in linux-stable. Or does it still need testing?
> >>>>>
> >>>>> I'm not seeing it in linux-next. A reasonable short term work
> >>>>> around
> >>>>> is mount option 'metadata_ratio=1' and that's what needs more
> >>>>> testing, because it seems decently likely mortal users will need
> >>>>> an easy work around until a fix gets backported to stable. And
> >>>>> that's gonna be a while, me thinks.
> >>>>>
> >>>>> Is that mount option sufficient? Or does it take a filtered
> >>>>> balance?
> >>>>> What's the most minimal balance needed? I'm hoping -dlimit=1
> >>>>
> >>>> Does not make a difference. I did:
> >>>>
> >>>> - mount -o remount,metadata_ratio=1 /daten
> >>>> - touch /daten/somefile
> >>>> - dd if=/dev/zero of=/daten/someotherfile bs=1M count=500
> >>>> - sync
> >>>> - df still reporting zero space free
> >>>>
> >>>>> I can't figure out a way to trigger this though, otherwise I'd
> >>>>> be
> >>>>> doing more testing.
> >>>>
> >>>> Sure.
> >>>>
> >>>> I am doing the balance -dlimit=1 thing next. With
> >>>> metadata_ratio=0
> >>>> again.
> >>>>
> >>>> % btrfs balance start -dlimit=1 /daten
> >>>> Done, had to relocate 1 out of 312 chunks
> >>>>
> >>>> % LANG=en df -hT /daten
> >>>> Filesystem Type Size Used Avail Use% Mounted on
> >>>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >>>>
> >>>> Okay, doing with metadata_ratio=1:
> >>>>
> >>>> % mount -o remount,metadata_ratio=1 /daten
> >>>>
> >>>> % btrfs balance start -dlimit=1 /daten
> >>>> Done, had to relocate 1 out of 312 chunks
> >>>>
> >>>> % LANG=en df -hT /daten
> >>>> Filesystem Type Size Used Avail Use% Mounted on
> >>>> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >>>>
> >>>>
> >>>> Okay, other suggestions? I'd like to avoid shuffling 311 GiB data
> >>>> around using a full balance.
> >>>
> >>> There's earlier anecdotal evidence that -dlimit=10 will work. But
> >>> you
> >>> can just keep using -dlimit=1 and it'll balance a different block
> >>> group each time (you can confirm/deny this with the block group
> >>> address and extent count in dmesg for each balance). Count how
> >>> many it takes to get df to stop misreporting. It may be a file
> >>> system specific value.
> >>
> >> Lost patience after 25 attempts:
> >>
> >> date; let I=I+1; echo "Balance $I"; btrfs balance start -dlimit=1
> >> /daten ; LANG=en df -hT /daten
> >> Do 30. Jan 20:59:17 CET 2020
> >> Balance 25
> >> Done, had to relocate 1 out of 312 chunks
> >> Filesystem Type Size Used Avail Use% Mounted on
> >> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >>
> >>
> >> Doing the -dlimit=10 balance now:
> >>
> >> % btrfs balance start -dlimit=10 /daten ; LANG=en df -hT /daten
> >> Done, had to relocate 10 out of 312 chunks
> >> Filesystem Type Size Used Avail Use% Mounted on
> >> /dev/mapper/sata-daten btrfs 400G 311G 0 100% /daten
> >>
> >> Okay, enough of balancing for today.
> >>
> >> I bet I just wait for a proper fix, instead of needlessly shuffling
> >> data around.
> >
> > What about unmounting and remounting?
> >
> > There is a proposed patch that David referenced in this thread, but
> > it's looking like it papers over the real problem. But even if so,
> > that'd get your file system working sooner than a proper fix, which
> > I
> > think (?) needs to be demonstrated to at least cause no new
> > regressions in 5.6, before it'll be backported to stable.
>
> The file system is fine, you don't need to balance or anything, this
> is purely a cosmetic bug. _Always_ trust what btrfs filesystem usage
> tells you, and it's telling you that there's 88 GiB of unallocated
> space. df is just wrong because 5 years ago we arbitrarily decided
> to set b_avail to 0 if we didn't have enough metadata space for the
> whole global reserve, despite how much unallocated space we had left.
Okay, that is what I got initially.
However, Chris then suggested doing some balances, and I thought I was
helping to test something that could help other users. I did not question
whether the balances made sense or not.
> A recent change means that we are more likely to not have enough
> free metadata space for the whole global reserve if there's
> unallocated space, specifically because we can use that unallocated
> space if we absolutely have to. The fix will be to adjust the
> statfs() madness and then df will tell you the right thing (well as
> right as it can ever tell you anyway.) Thanks,
Works for me.
Thank you,
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 21:10 ` Martin Steigerwald
@ 2020-01-30 21:20 ` Remi Gauvin
2020-01-30 23:12 ` Martin Steigerwald
0 siblings, 1 reply; 25+ messages in thread
From: Remi Gauvin @ 2020-01-30 21:20 UTC (permalink / raw)
To: Martin Steigerwald; +Cc: linux-btrfs
On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
>
> I am done with re-balancing experiments.
>
It should be pretty easy to fix: use the metadata_ratio=1 mount option,
then write enough to force the allocation of more data space.
In your earlier attempt, you wrote 500MB, but from your btrfs filesystem
usage, you had over 1GB of allocated but unused space.
If you wrote and deleted, say, 20GB of zeroes, that should force the
allocation of metadata space to get you past the global reserve size
that is causing this bug. (Assuming this bug is even impacting you; I
was unclear from your messages whether you are seeing any ill effects
besides the misreporting in df.)
Note: Make sure you don't have anything taking automatic snapshots
during the 20GB write/delete. I would create a new subvolume for it to
avoid that.
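Remi's suggestion can be condensed into a short sequence (a hedged sketch, not a verified fix: the subvolume name is made up, the commands need root on a btrfs mount, and whether this clears the misreport depends on the bug being the one discussed here):

```shell
# Sketch of the suggested workaround: force extra metadata allocation by
# writing and deleting ~20 GiB inside a throwaway subvolume (so that no
# snapshot tool captures the temporary data). Requires root and btrfs.
force_metadata_alloc() {
    mnt="$1"
    mount -o remount,metadata_ratio=1 "$mnt" &&
    btrfs subvolume create "$mnt/ratio-tmp" &&
    dd if=/dev/zero of="$mnt/ratio-tmp/fill" bs=1M count=20480 &&
    sync &&
    btrfs subvolume delete "$mnt/ratio-tmp"
}
# Example invocation (the thread's mount point): force_metadata_alloc /daten
```

The remount is left in place only for the duration of the experiment; metadata_ratio can be dropped again afterwards.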
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 21:09 ` Chris Murphy
@ 2020-01-30 21:32 ` Martin Raiber
2020-01-30 21:42 ` Josef Bacik
0 siblings, 1 reply; 25+ messages in thread
From: Martin Raiber @ 2020-01-30 21:32 UTC (permalink / raw)
To: Chris Murphy, Josef Bacik; +Cc: Martin Steigerwald, Martin Raiber, Btrfs BTRFS
On 30.01.2020 22:09 Chris Murphy wrote:
> On Thu, Jan 30, 2020 at 1:59 PM Josef Bacik <josef@toxicpanda.com> wrote:
>> The file system is fine, you don't need to balance or anything, this is purely a
>> cosmetic bug.
> It's not entirely cosmetic if a program uses statfs to check free
> space and then errors when it finds none. Some people are running into
> that.
>
> https://lore.kernel.org/linux-btrfs/alpine.DEB.2.21.99999.375.2001131514010.21037@trent.utfs.org/
>
> I guess right now the most reliable work around is to revert to 5.3.18.
Yeah, my backup system does check via statfs() if a file fits on storage
before backing it up and there is also a setting (global soft file
system quota) that allows users to configure the amount of storage the
backup system should use (e.g. 90% of available storage). In both cases
with statfs() returning zero it'll delete backups till there is "enough
free space", i.e. it'll delete all the backups it is allowed to delete
and then start reporting errors, which is obviously very problematic.
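The failure mode Martin describes boils down to a pre-write free-space check along these lines (a hypothetical sketch, not the backup system's actual code; assumes GNU stat and df):

```shell
# Hypothetical sketch of a statfs()-style "does this file fit?" check.
# On an affected 5.5 kernel df's available column reads 0, so the check
# fails for every file and the deletion/cleanup logic kicks in.
check_fits() {
    file="$1"; dest="$2"
    # size of the file, rounded up to whole KiB
    need_kib=$(( ( $(stat -c %s "$file") + 1023 ) / 1024 ))
    # available space on the destination, as statfs()/df reports it
    avail_kib=$(df -P "$dest" | awk 'NR==2 {print $4}')
    [ "$avail_kib" -ge "$need_kib" ]
}
```

When statfs() reports zero available, check_fits fails regardless of how much unallocated space btrfs actually has, which is exactly the trap described above.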
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 21:32 ` Martin Raiber
@ 2020-01-30 21:42 ` Josef Bacik
0 siblings, 0 replies; 25+ messages in thread
From: Josef Bacik @ 2020-01-30 21:42 UTC (permalink / raw)
To: Martin Raiber, Chris Murphy; +Cc: Martin Steigerwald, Btrfs BTRFS
On 1/30/20 4:32 PM, Martin Raiber wrote:
> On 30.01.2020 22:09 Chris Murphy wrote:
>> On Thu, Jan 30, 2020 at 1:59 PM Josef Bacik <josef@toxicpanda.com> wrote:
>>> The file system is fine, you don't need to balance or anything, this is purely a
>>> cosmetic bug.
>> It's not entirely cosmetic if a program uses statfs to check free
>> space and then errors when it finds none. Some people are running into
>> that.
>>
>> https://lore.kernel.org/linux-btrfs/alpine.DEB.2.21.99999.375.2001131514010.21037@trent.utfs.org/
>>
>> I guess right now the most reliable work around is to revert to 5.3.18.
>
> Yeah, my backup system does check via statfs() if a file fits on storage
> before backing it up and there is also a setting (global soft file
> system quota) that allows users to configure the amount of storage the
> backup system should use (e.g. 90% of available storage). In both cases
> with statfs() returning zero it'll delete backups till there is "enough
> free space", i.e. it'll delete all the backups it is allowed to delete
> and then start reporting errors, which is obviously very problematic.
>
Yup, I'm not meaning to say the problems your application is hitting aren't real ;).
But balancing and the like, or using metadata_ratio, isn't necessary; we just need to
fix statfs for you so the application stops misbehaving. Thanks,
Josef
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 21:20 ` Remi Gauvin
@ 2020-01-30 23:12 ` Martin Steigerwald
2020-01-31 1:43 ` Matt Corallo
0 siblings, 1 reply; 25+ messages in thread
From: Martin Steigerwald @ 2020-01-30 23:12 UTC (permalink / raw)
To: Remi Gauvin; +Cc: linux-btrfs
Remi Gauvin - 30.01.20, 22:20:47 CET:
> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
> > I am done with re-balancing experiments.
>
> It should be pretty easy to fix.. use the metadata_ratio=1 mount
> option, then write enough to force the allocation of more data
> space,,
>
> In your earlier attempt, you wrote 500MB, but from your btrfs
> filesystem usage, you had over 1GB of allocated but unused space.
>
> If you wrote and deleted, say, 20GB of zeroes, that should force the
> allocation of metadata space to get you past the global reserve size
> that is causing this bug,, (Assuming this bug is even impacting you.
> I was unclear from your messages if you are seeing any ill effects
> besides the misreporting in df.)
I thought more about writing a lot of little files, as I expect that to
use more metadata, but… I can just work around it by using command line
tools instead of Dolphin to move data around. This filesystem mostly holds
my music, photos and so on; I do not change data on it very often, so
that will most likely work just fine for me until there is a proper fix.
So no need to do anything more that could potentially age the
filesystem. :)
--
Martin
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 23:12 ` Martin Steigerwald
@ 2020-01-31 1:43 ` Matt Corallo
2020-01-31 1:57 ` Qu Wenruo
2020-01-31 4:12 ` Etienne Champetier
0 siblings, 2 replies; 25+ messages in thread
From: Matt Corallo @ 2020-01-31 1:43 UTC (permalink / raw)
To: Martin Steigerwald; +Cc: Remi Gauvin, linux-btrfs
This is a pretty critical regression for me. I have a few applications that regularly check available space and exit if they find too little, as well as a number of applications that, e.g., rsync small files, causing this bug to appear (even with many TB free). It looks like the suggested patch isn't moving towards stable; is there some other patch we should be testing?
> On Jan 30, 2020, at 18:12, Martin Steigerwald <martin@lichtvoll.de> wrote:
>
> Remi Gauvin - 30.01.20, 22:20:47 CET:
>>> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
>>> I am done with re-balancing experiments.
>>
>> It should be pretty easy to fix.. use the metadata_ratio=1 mount
>> option, then write enough to force the allocation of more data
>> space,,
>>
>> In your earlier attempt, you wrote 500MB, but from your btrfs
>> filesystem usage, you had over 1GB of allocated but unused space.
>>
>> If you wrote and deleted, say, 20GB of zeroes, that should force the
>> allocation of metadata space to get you past the global reserve size
>> that is causing this bug,, (Assuming this bug is even impacting you.
>> I was unclear from your messages if you are seeing any ill effects
>> besides the misreporting in df.)
>
> I thought more about writing a lot of little files as I expect that to
> use more metadata, but… I can just work around it by using command line
> tools instead of Dolphin to move data around. This is mostly my music,
> photos and so on filesystem, I do not change data on it very often, so
> that will most likely work just fine for me until there is a proper fix.
>
> So no need to do anything more that could potentially age the
> filesystem. :)
>
> --
> Martin
>
>
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-31 1:43 ` Matt Corallo
@ 2020-01-31 1:57 ` Qu Wenruo
2020-03-02 1:57 ` Etienne Champetier
2020-01-31 4:12 ` Etienne Champetier
1 sibling, 1 reply; 25+ messages in thread
From: Qu Wenruo @ 2020-01-31 1:57 UTC (permalink / raw)
To: Matt Corallo, Martin Steigerwald; +Cc: Remi Gauvin, linux-btrfs
On 2020/1/31 9:43 AM, Matt Corallo wrote:
> This is a pretty critical regression for me. I have a few applications that regularly check available space and exit if they find too little, as well as a number of applications that, e.g., rsync small files, causing this bug to appear (even with many TB free). It looks like the suggested patch isn't moving towards stable; is there some other patch we should be testing?
The patch mentioned is no longer maintained; it was originally planned
as a quick fix for v5.5, but concern about whether we should report 0
available space when metadata is exhausted blocked the merge.
The proper fix for the next release can be found here:
https://github.com/adam900710/linux/tree/per_type_avail
I hope that this time the patchset can be merged without further blockage.
Thanks,
Qu
>
>> On Jan 30, 2020, at 18:12, Martin Steigerwald <martin@lichtvoll.de> wrote:
>>
>> Remi Gauvin - 30.01.20, 22:20:47 CET:
>>>> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
>>>> I am done with re-balancing experiments.
>>>
>>> It should be pretty easy to fix.. use the metadata_ratio=1 mount
>>> option, then write enough to force the allocation of more data
>>> space,,
>>>
>>> In your earlier attempt, you wrote 500MB, but from your btrfs
>>> filesystem usage, you had over 1GB of allocated but unused space.
>>>
>>> If you wrote and deleted, say, 20GB of zeroes, that should force the
>>> allocation of metadata space to get you past the global reserve size
>>> that is causing this bug,, (Assuming this bug is even impacting you.
>>> I was unclear from your messages if you are seeing any ill effects
>>> besides the misreporting in df.)
>>
>> I thought more about writing a lot of little files as I expect that to
>> use more metadata, but… I can just work around it by using command line
>> tools instead of Dolphin to move data around. This is mostly my music,
>> photos and so on filesystem, I do not change data on it very often, so
>> that will most likely work just fine for me until there is a proper fix.
>>
>> So no need to do anything more that could potentially age the
>> filesystem. :)
>>
>> --
>> Martin
>>
>>
>
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-30 19:31 ` Chris Murphy
2020-01-30 19:58 ` Martin Steigerwald
@ 2020-01-31 3:00 ` Zygo Blaxell
1 sibling, 0 replies; 25+ messages in thread
From: Zygo Blaxell @ 2020-01-31 3:00 UTC (permalink / raw)
To: Chris Murphy; +Cc: David Sterba, Martin Steigerwald, Martin Raiber, Btrfs BTRFS
On Thu, Jan 30, 2020 at 12:31:40PM -0700, Chris Murphy wrote:
> On Thu, Jan 30, 2020 at 10:20 AM David Sterba <dsterba@suse.cz> wrote:
> >
> > On Wed, Jan 29, 2020 at 03:55:06PM -0700, Chris Murphy wrote:
> > > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
> > > >
> > > > So if it's just a cosmetic issue then I can wait for the patch to land in
> > > > linux-stable. Or does it still need testing?
> > >
> > > I'm not seeing it in linux-next. A reasonable short term work around
> > > is mount option 'metadata_ratio=1' and that's what needs more testing,
> > > because it seems decently likely mortal users will need an easy work
> > > around until a fix gets backported to stable. And that's gonna be a
> > > while, me thinks.
> >
> > We're looking into some fix that could be backported, as it affects a
> > long-term kernel (5.4).
> >
> > The fix
> > https://lore.kernel.org/linux-btrfs/20200115034128.32889-1-wqu@suse.com/
> > IMHO works by accident and is not good even as a workaround, only papers
> > over the problem in some cases. The size of metadata over-reservation
> > (caused by a change in the logic that estimates the 'over-' part) adds
> > up to the global block reserve (that's permanent and as last resort
> > reserve for deletion).
> >
> > In other words "we're making this larger by number A, so let's subtract
> > some number B". The fix is to use A.
> >
> > > Is that mount option sufficient? Or does it take a filtered balance?
> > > What's the most minimal balance needed? I'm hoping -dlimit=1
> > >
> > > I can't figure out a way to trigger this though, otherwise I'd be
> > > doing more testing.
> >
> > I haven't checked but I think the suggested workarounds affect statfs as
> > a side effect. Also as the reservations are temporary, the numbers
> > change again after a sync.
>
> Yeah I'm being careful to qualify to mortal users that any workarounds
> are temporary and uncertain. I'm not even certain what the pattern is,
> people with new file systems have hit it. A full balance seems to fix
> it, and then soon after the problem happens again. I don't do any
> balancing these days, for over a year now, so I wonder if that's why
> I'm not seeing it.
I had to intentionally balance metadata to trigger the bug on pre-existing
test filesystems. With new filesystems it's easy, I hit it every time
the last metadata block group is half full (assuming default BG size of
1GB and max global reserved size of 512MB). It goes away when a new
metadata BG is allocated, then comes back again later. Sometimes it
appears and disappears rapidly while doing a large file tree copy.
An older filesystem will have some GB of allocated but partially empty
metadata BGs, and won't hit this condition unless you run metadata balance
(which shrinks metadata), or do something that causes explosive metadata
growth. If you normally keep a healthy amount of allocated but unused
metadata space, you probably will never hit the bug.
> But yeah a small number of people are hitting it, but it also stops
> any program that does a free space check (presumably using statfs).
>
> A more reliable/universal work around in the meantime is still useful;
> in particular if it doesn't require changing mount options, or only
> requires it temporarily (e.g. not added to /etc/fstab, where it can
> be forgotten for the life of that system).
You can create a GB of allocated but unused metadata space with something
like:
    btrfs sub create sub_tmp
    mkdir sub_tmp/single
    head -c 2047 /dev/urandom > sub_tmp/single/inline_file
    for x in $(seq 1 18); do
        cp -a sub_tmp/single sub_tmp/double
        mv sub_tmp/double sub_tmp/single/$x
    done
    sync
    btrfs sub del sub_tmp
This requires the max_inline mount option to be set to the default (2048).
Random data means it works well when compression is enabled too.
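Rough arithmetic for why the loop above inflates metadata (assuming the 2048-byte max_inline default mentioned, under which each 2047-byte file is stored inline in the metadata trees):

```shell
# 18 doublings of a directory holding one 2047-byte file yield 2^18
# copies; inline files live entirely in metadata, so this forces roughly
# half a GiB of metadata (plus item overhead), enough to make btrfs
# allocate fresh metadata block groups.
files=$(( 1 << 18 ))
inline_bytes=$(( files * 2047 ))
echo "$files files, ~$(( inline_bytes / 1024 / 1024 )) MiB of inline data"
```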
Do not balance metadata until the bug is fixed. Balancing metadata will
release the allocated but unused metadata space, possibly retriggering
the bug.
(Hmmm...the above script is also a surprisingly effective commit latency
test case...)
>
> --
> Chris Murphy
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-31 1:43 ` Matt Corallo
2020-01-31 1:57 ` Qu Wenruo
@ 2020-01-31 4:12 ` Etienne Champetier
1 sibling, 0 replies; 25+ messages in thread
From: Etienne Champetier @ 2020-01-31 4:12 UTC (permalink / raw)
To: linux-btrfs; +Cc: Martin Steigerwald, Matt Corallo, Remi Gauvin
Le jeu. 30 janv. 2020 à 20:53, Matt Corallo <kernel@bluematt.me> a écrit :
>
> This is a pretty critical regression for me. I have a few applications that regularly check space available and exit if they find low available space, as well as a number of applications that, e.g., rsync small files, causing this bug to appear (even with many TB free). It looks like the suggested patch isn’t moving towards stable. Is there some other patch we should be testing?
I was migrating some data over to a new Fedora 31 server with btrfs
when dnf complained that my disks were full, despite having 2.61TiB
unallocated.
Once a new kernel with a fix is available, Fedora systems will need
some manual work to recover.
Running transaction test
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
installing package libev-4.27-1.fc31.x86_64 needs 160KB on the / filesystem
...
# uname -r
5.4.7-200.fc31.x86_64
# LANG=C df -hT /
Filesystem                                            Type  Size  Used Avail Use% Mounted on
/dev/mapper/luks-17a54b4f-58b1-447f-a1e2-f927503936f2 btrfs 3.7T  1.1T     0 100% /
# btrfs fi usage -T /
Overall:
    Device size:                   3.63TiB
    Device allocated:              1.03TiB
    Device unallocated:            2.61TiB
    Device missing:                  0.00B
    Used:                          1.02TiB
    Free (estimated):              2.61TiB      (min: 1.31TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

                                                         Data       Metadata System
Id Path                                                  RAID0      RAID1    RAID1    Unallocated
-- ----------------------------------------------------- ---------  -------- -------- -----------
 1 /dev/mapper/luks-17a54b4f-58b1-447f-a1e2-f927503936f2 522.00GiB   3.00GiB  8.00MiB     1.30TiB
 2 /dev/mapper/luks-ee555ebd-59ca-4ed8-9d68-7c3faf6e6a25 522.00GiB   3.00GiB  8.00MiB     1.30TiB
-- ----------------------------------------------------- ---------  -------- -------- -----------
   Total                                                 1.02TiB     3.00GiB  8.00MiB     2.61TiB
   Used                                                  1.02TiB     2.75GiB 96.00KiB
The good news is that I noticed I messed up my kickstart install
and ended up with RAID0 for data.
>
> > On Jan 30, 2020, at 18:12, Martin Steigerwald <martin@lichtvoll.de> wrote:
> >
> > Remi Gauvin - 30.01.20, 22:20:47 CET:
> >>> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
> >>> I am done with re-balancing experiments.
> >>
> >> It should be pretty easy to fix: use the metadata_ratio=1 mount
> >> option, then write enough to force the allocation of more data
> >> space.
> >>
> >> In your earlier attempt, you wrote 500MB, but from your btrfs
> >> filesystem usage, you had over 1GB of allocated but unused space.
> >>
> >> If you wrote and deleted, say, 20GB of zeroes, that should force the
> >> allocation of metadata space to get you past the global reserve size
> >> that is causing this bug. (Assuming this bug is even impacting you;
> >> I was unclear from your messages whether you are seeing any ill
> >> effects besides the misreporting in df.)
> >
> > I thought more about writing a lot of little files as I expect that to
> > use more metadata, but… I can just work around it by using command line
> > tools instead of Dolphin to move data around. This is mostly my music,
> > photos and so on filesystem, I do not change data on it very often, so
> > that will most likely work just fine for me until there is a proper fix.
> >
> > So no need to do anything else that could potentially age the
> > filesystem. :)
> >
> > --
> > Martin
> >
> >
>
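The write-and-delete workaround Remi describes in the quoted text can be
sketched as follows. The temp directory and the 64 MiB size are
placeholders purely so the sketch is safe to run anywhere; on a real
affected filesystem you would pick a directory on that filesystem, use
roughly the 20GB the thread suggests, and first set the mount option.

```shell
# Placeholder for the real first step on an affected filesystem:
#   mount -o remount,metadata_ratio=1 <mountpoint>
# Then write and delete a large file of zeroes to force new chunk
# (and, with metadata_ratio=1, metadata) allocation.
TARGET=$(mktemp -d)
dd if=/dev/zero of="$TARGET/fill.tmp" bs=1M count=64 status=none
sync
rm "$TARGET/fill.tmp"
sync
rmdir "$TARGET"
echo "fill/delete pass complete"
```

This is a sketch of the suggestion in the thread, not a verified fix;
whether it helps depends on the bug actually being the one discussed
here.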
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-01-31 1:57 ` Qu Wenruo
@ 2020-03-02 1:57 ` Etienne Champetier
2020-03-02 1:59 ` Qu Wenruo
0 siblings, 1 reply; 25+ messages in thread
From: Etienne Champetier @ 2020-03-02 1:57 UTC (permalink / raw)
To: Qu Wenruo; +Cc: Matt Corallo, Martin Steigerwald, Remi Gauvin, linux-btrfs
Hello Qu,
Le jeu. 30 janv. 2020 à 20:57, Qu Wenruo <quwenruo.btrfs@gmx.com> a écrit :
>
>
>
> On 2020/1/31 上午9:43, Matt Corallo wrote:
> > This is a pretty critical regression for me. I have a few applications that regularly check space available and exit if they find low available space, as well as a number of applications that, e.g., rsync small files, causing this bug to appear (even with many TB free). It looks like the suggested patch isn’t moving towards stable. Is there some other patch we should be testing?
>
> That mentioned patch is no longer maintained, since it was originally
> planned as a quick fix for v5.5, but the open question of whether we
> should report 0 available space when metadata is exhausted has blocked
> the merge.
>
> The proper fix for next release can be found here:
> https://github.com/adam900710/linux/tree/per_type_avail
>
> I hope this time, the patchset can be merged without extra blockage.
I see that there is a v8 of this patchset but can't find it in 5.6; is
it targeted at 5.7 now?
https://patchwork.kernel.org/project/linux-btrfs/list/?series=245113
Thanks
Etienne
> Thanks,
> Qu
> >
> >> On Jan 30, 2020, at 18:12, Martin Steigerwald <martin@lichtvoll.de> wrote:
> >>
> >> Remi Gauvin - 30.01.20, 22:20:47 CET:
> >>>> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
> >>>> I am done with re-balancing experiments.
> >>>
> >>> It should be pretty easy to fix: use the metadata_ratio=1 mount
> >>> option, then write enough to force the allocation of more data
> >>> space.
> >>>
> >>> In your earlier attempt, you wrote 500MB, but from your btrfs
> >>> filesystem usage, you had over 1GB of allocated but unused space.
> >>>
> >>> If you wrote and deleted, say, 20GB of zeroes, that should force the
> >>> allocation of metadata space to get you past the global reserve size
> >>> that is causing this bug. (Assuming this bug is even impacting you;
> >>> I was unclear from your messages whether you are seeing any ill
> >>> effects besides the misreporting in df.)
> >>
> >> I thought more about writing a lot of little files as I expect that to
> >> use more metadata, but… I can just work around it by using command line
> >> tools instead of Dolphin to move data around. This is mostly my music,
> >> photos and so on filesystem, I do not change data on it very often, so
> >> that will most likely work just fine for me until there is a proper fix.
> >>
> >> So no need to do anything else that could potentially age the
> >> filesystem. :)
> >>
> >> --
> >> Martin
> >>
> >>
> >
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: With Linux 5.5: Filesystem full while still 90 GiB free
2020-03-02 1:57 ` Etienne Champetier
@ 2020-03-02 1:59 ` Qu Wenruo
0 siblings, 0 replies; 25+ messages in thread
From: Qu Wenruo @ 2020-03-02 1:59 UTC (permalink / raw)
To: Etienne Champetier
Cc: Matt Corallo, Martin Steigerwald, Remi Gauvin, linux-btrfs
[-- Attachment #1.1: Type: text/plain, Size: 2835 bytes --]
On 2020/3/2 上午9:57, Etienne Champetier wrote:
> Hello Qu,
>
> Le jeu. 30 janv. 2020 à 20:57, Qu Wenruo <quwenruo.btrfs@gmx.com> a écrit :
>>
>>
>>
>> On 2020/1/31 上午9:43, Matt Corallo wrote:
>>> This is a pretty critical regression for me. I have a few applications that regularly check space available and exit if they find low available space, as well as a number of applications that, e.g., rsync small files, causing this bug to appear (even with many TB free). It looks like the suggested patch isn’t moving towards stable. Is there some other patch we should be testing?
>>
>> That mentioned patch is no longer maintained, since it was originally
>> planned as a quick fix for v5.5, but the open question of whether we
>> should report 0 available space when metadata is exhausted has blocked
>> the merge.
>>
>> The proper fix for next release can be found here:
>> https://github.com/adam900710/linux/tree/per_type_avail
>>
>> I hope this time, the patchset can be merged without extra blockage.
>
> I see that there is a v8 of this patchset but can't find it in 5.6; is
> it targeted at 5.7 now?
> https://patchwork.kernel.org/project/linux-btrfs/list/?series=245113
That's to be determined by David; I hope it can reach v5.7, but I
don't have the final say.
Thanks,
Qu
>
> Thanks
> Etienne
>
>> Thanks,
>> Qu
>>>
>>>> On Jan 30, 2020, at 18:12, Martin Steigerwald <martin@lichtvoll.de> wrote:
>>>>
>>>> Remi Gauvin - 30.01.20, 22:20:47 CET:
>>>>>> On 2020-01-30 4:10 p.m., Martin Steigerwald wrote:
>>>>>> I am done with re-balancing experiments.
>>>>>
>>>>> It should be pretty easy to fix: use the metadata_ratio=1 mount
>>>>> option, then write enough to force the allocation of more data
>>>>> space.
>>>>>
>>>>> In your earlier attempt, you wrote 500MB, but from your btrfs
>>>>> filesystem usage, you had over 1GB of allocated but unused space.
>>>>>
>>>>> If you wrote and deleted, say, 20GB of zeroes, that should force the
>>>>> allocation of metadata space to get you past the global reserve size
>>>>> that is causing this bug. (Assuming this bug is even impacting you;
>>>>> I was unclear from your messages whether you are seeing any ill
>>>>> effects besides the misreporting in df.)
>>>>
>>>> I thought more about writing a lot of little files as I expect that to
>>>> use more metadata, but… I can just work around it by using command line
>>>> tools instead of Dolphin to move data around. This is mostly my music,
>>>> photos and so on filesystem, I do not change data on it very often, so
>>>> that will most likely work just fine for me until there is a proper fix.
>>>>
>>>> So no need to do anything else that could potentially age the
>>>> filesystem. :)
>>>>
>>>> --
>>>> Martin
>>>>
>>>>
>>>
>>
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads:[~2020-03-02 2:00 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-29 19:33 With Linux 5.5: Filesystem full while still 90 GiB free Martin Steigerwald
2020-01-29 20:04 ` Martin Raiber
2020-01-29 21:20 ` Martin Steigerwald
2020-01-29 22:55 ` Chris Murphy
2020-01-30 10:41 ` Martin Steigerwald
2020-01-30 16:37 ` Chris Murphy
2020-01-30 20:02 ` Martin Steigerwald
2020-01-30 20:18 ` Chris Murphy
2020-01-30 20:59 ` Josef Bacik
2020-01-30 21:09 ` Chris Murphy
2020-01-30 21:32 ` Martin Raiber
2020-01-30 21:42 ` Josef Bacik
2020-01-30 21:12 ` Martin Steigerwald
2020-01-30 21:10 ` Martin Steigerwald
2020-01-30 21:20 ` Remi Gauvin
2020-01-30 23:12 ` Martin Steigerwald
2020-01-31 1:43 ` Matt Corallo
2020-01-31 1:57 ` Qu Wenruo
2020-03-02 1:57 ` Etienne Champetier
2020-03-02 1:59 ` Qu Wenruo
2020-01-31 4:12 ` Etienne Champetier
2020-01-30 17:19 ` David Sterba
2020-01-30 19:31 ` Chris Murphy
2020-01-30 19:58 ` Martin Steigerwald
2020-01-31 3:00 ` Zygo Blaxell