* No space left on device when doing "mkdir"
@ 2017-04-27 13:52 Gerard Saraber
2017-04-27 14:07 ` Roman Mamedov
0 siblings, 1 reply; 7+ messages in thread
From: Gerard Saraber @ 2017-04-27 13:52 UTC (permalink / raw)
To: linux-btrfs
Hi everyone,
I'm running: Linux shrapnel 4.11.0-rc8 #2 SMP Mon Apr 24 08:47:46 CDT
2017 x86_64 Intel(R) Xeon(R) CPU 5140 @ 2.33GHz GenuineIntel GNU/Linux
32GB memory
There's a kworker taking 100% cpu:
30856 root 20 0 0 0 0 R 100.0 0.0 19:26.90
kworker/u8:0
and writing about 3-5MB/sec according to iotop:
30856 be/4 root 0.00 B/s 5.07 M/s 0.00 % 0.00 % [kworker/u8:0]
My filesystem is a RAID1 and has plenty of space:
/dev/sdb 35T 21T 14T 61% /home/exports
and I am writing to the volume heavily over NFS.
Label: none uuid: 7af2e65c-3935-4e0d-aa63-9ef6be991cb9
Total devices 18 FS bytes used 20.92TiB
devid 2 size 3.64TiB used 2.24TiB path /dev/sdb
devid 3 size 3.64TiB used 2.46TiB path /dev/sdc
devid 4 size 3.64TiB used 2.00TiB path /dev/sde
devid 7 size 2.73TiB used 1.42TiB path /dev/sdl
devid 8 size 2.73TiB used 1.12TiB path /dev/sdn
devid 9 size 3.64TiB used 2.35TiB path /dev/sdq
devid 10 size 3.64TiB used 2.07TiB path /dev/sdr
devid 11 size 5.46TiB used 4.19TiB path /dev/sda
devid 12 size 5.46TiB used 4.23TiB path /dev/sdf
devid 13 size 5.46TiB used 4.08TiB path /dev/sdh
devid 14 size 3.64TiB used 1.98TiB path /dev/sdo
devid 15 size 2.73TiB used 1.07TiB path /dev/sdm
devid 17 size 5.46TiB used 3.80TiB path /dev/sdd
devid 18 size 2.73TiB used 1.07TiB path /dev/sdj
devid 19 size 2.73TiB used 1.07TiB path /dev/sdg
devid 20 size 2.73TiB used 1.07TiB path /dev/sdk
devid 21 size 3.64TiB used 1.98TiB path /dev/sdp
devid 22 size 5.46TiB used 3.80TiB path /dev/sds
The problem:
shrapnel gerard-store # mkdir gopro
mkdir: cannot create directory 'gopro': No space left on device
^^ that error message pops up after 10-15 seconds.
I could just reboot the system and be fine for a week or so, but is
there any way to diagnose this?
Thanks!
-Gerard
* Re: No space left on device when doing "mkdir"
From: Roman Mamedov @ 2017-04-27 14:07 UTC (permalink / raw)
To: Gerard Saraber; +Cc: linux-btrfs
On Thu, 27 Apr 2017 08:52:30 -0500
Gerard Saraber <gsaraber@rarcoa.com> wrote:
> I could just reboot the system and be fine for a week or so, but is
> there any way to diagnose this?
`btrfs fi df` for a start.
Also obligatory questions: do you have a lot of snapshots, and do you use
qgroups?
--
With respect,
Roman
* Re: No space left on device when doing "mkdir"
From: Gerard Saraber @ 2017-04-27 15:18 UTC (permalink / raw)
To: Roman Mamedov; +Cc: linux-btrfs
No snapshots and no qgroups; just a straight-up large volume.
shrapnel gerard-store # btrfs fi df /home/exports
Data, RAID1: total=20.93TiB, used=20.86TiB
System, RAID1: total=32.00MiB, used=3.73MiB
Metadata, RAID1: total=79.00GiB, used=61.10GiB
GlobalReserve, single: total=512.00MiB, used=544.00KiB
shrapnel gerard-store # btrfs filesystem usage /home/exports
Overall:
Device size: 69.13TiB
Device allocated: 42.01TiB
Device unallocated: 27.13TiB
Device missing: 0.00B
Used: 41.84TiB
Free (estimated): 13.63TiB (min: 13.63TiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 1.52MiB)
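The "Free (estimated)" figure can be sanity-checked with a quick sketch. This is an approximate formula only (unallocated space divided by the data ratio, plus the slack inside already-allocated data chunks); the tool itself works from raw byte counts, so rounding may differ slightly:

```python
# Sanity check of "Free (estimated)" using the figures reported above.
# RAID1 keeps two copies of all data, hence the division by the data ratio.
unallocated_tib = 27.13   # Device unallocated
data_ratio = 2.0          # Data ratio (RAID1)
data_total_tib = 20.93    # Data chunk space allocated (from btrfs fi df)
data_used_tib = 20.86     # ... of which actually used

free_est = unallocated_tib / data_ratio + (data_total_tib - data_used_tib)
print(f"~{free_est:.1f} TiB free")  # close to the reported 13.63 TiB
```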
On Thu, Apr 27, 2017 at 9:07 AM, Roman Mamedov <rm@romanrm.net> wrote:
> On Thu, 27 Apr 2017 08:52:30 -0500
> Gerard Saraber <gsaraber@rarcoa.com> wrote:
>
>> I could just reboot the system and be fine for a week or so, but is
>> there any way to diagnose this?
>
> `btrfs fi df` for a start.
>
> Also obligatory questions: do you have a lot of snapshots, and do you use
> qgroups?
>
> --
> With respect,
> Roman
* Re: No space left on device when doing "mkdir"
From: Gerard Saraber @ 2017-04-27 16:46 UTC (permalink / raw)
To: Roman Mamedov; +Cc: linux-btrfs
After a reboot, I found this in the logs:
[ 322.510152] BTRFS info (device sdm): The free space cache file
(36114966511616) is invalid. skip it
[ 488.702570] btrfs_printk: 847 callbacks suppressed
On Thu, Apr 27, 2017 at 10:18 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
> no snapshots and no qgroups, just a straight up large volume.
>
> shrapnel gerard-store # btrfs fi df /home/exports
> Data, RAID1: total=20.93TiB, used=20.86TiB
> System, RAID1: total=32.00MiB, used=3.73MiB
> Metadata, RAID1: total=79.00GiB, used=61.10GiB
> GlobalReserve, single: total=512.00MiB, used=544.00KiB
>
> shrapnel gerard-store # btrfs filesystem usage /home/exports
> Overall:
> Device size: 69.13TiB
> Device allocated: 42.01TiB
> Device unallocated: 27.13TiB
> Device missing: 0.00B
> Used: 41.84TiB
> Free (estimated): 13.63TiB (min: 13.63TiB)
> Data ratio: 2.00
> Metadata ratio: 2.00
> Global reserve: 512.00MiB (used: 1.52MiB)
>
> On Thu, Apr 27, 2017 at 9:07 AM, Roman Mamedov <rm@romanrm.net> wrote:
>> On Thu, 27 Apr 2017 08:52:30 -0500
>> Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>
>>> I could just reboot the system and be fine for a week or so, but is
>>> there any way to diagnose this?
>>
>> `btrfs fi df` for a start.
>>
>> Also obligatory questions: do you have a lot of snapshots, and do you use
>> qgroups?
>>
>> --
>> With respect,
>> Roman
* Re: No space left on device when doing "mkdir"
From: Chris Murphy @ 2017-04-27 23:35 UTC (permalink / raw)
To: Gerard Saraber; +Cc: Roman Mamedov, Btrfs BTRFS
On Thu, Apr 27, 2017 at 10:46 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
> After a reboot, I found this in the logs:
> [ 322.510152] BTRFS info (device sdm): The free space cache file
> (36114966511616) is invalid. skip it
> [ 488.702570] btrfs_printk: 847 callbacks suppressed
>
>
>
> On Thu, Apr 27, 2017 at 10:18 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
>> no snapshots and no qgroups, just a straight up large volume.
>>
>> shrapnel gerard-store # btrfs fi df /home/exports
>> Data, RAID1: total=20.93TiB, used=20.86TiB
>> System, RAID1: total=32.00MiB, used=3.73MiB
>> Metadata, RAID1: total=79.00GiB, used=61.10GiB
>> GlobalReserve, single: total=512.00MiB, used=544.00KiB
>>
>> shrapnel gerard-store # btrfs filesystem usage /home/exports
>> Overall:
>> Device size: 69.13TiB
>> Device allocated: 42.01TiB
>> Device unallocated: 27.13TiB
>> Device missing: 0.00B
>> Used: 41.84TiB
>> Free (estimated): 13.63TiB (min: 13.63TiB)
>> Data ratio: 2.00
>> Metadata ratio: 2.00
>> Global reserve: 512.00MiB (used: 1.52MiB)
>>
>> On Thu, Apr 27, 2017 at 9:07 AM, Roman Mamedov <rm@romanrm.net> wrote:
>>> On Thu, 27 Apr 2017 08:52:30 -0500
>>> Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>>
>>>> I could just reboot the system and be fine for a week or so, but is
>>>> there any way to diagnose this?
>>>
>>> `btrfs fi df` for a start.
>>>
>>> Also obligatory questions: do you have a lot of snapshots, and do you use
>>> qgroups?
>>>
A dev might find this helpful:
$ grep -IR . /sys/fs/btrfs/usevolumeUUIDhere/allocation/
Also note that a lot of people on the Btrfs list aren't getting Gerard's
emails: Gmail and some other mail agents classify them as spam because
of a DMARC failure. Basically, rarcoa.com publishes a policy telling
receivers to reject mail that claims to be from rarcoa.com but was not
actually sent from rarcoa.com's own servers. This is supposed to be
fixed on the mailing list servers: they need to strip these headers and
insert their own, rather than leaving them intact only for the mail to
be rejected later by receivers honoring the stated policy.
--
Chris Murphy
* Re: No space left on device when doing "mkdir"
From: Gerard Saraber @ 2017-04-28 13:56 UTC (permalink / raw)
To: Chris Murphy; +Cc: Roman Mamedov, Btrfs BTRFS
DMARC is off. Here's the output of the allocation counters; the
filesystem is behaving correctly right now, and I'll post an update
when it fails again.
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/flags:2
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/used_bytes:3948544
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/total_bytes:33554432
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_total:67108864
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_may_use:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_readonly:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_used:3948544
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_reserved:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_used:7897088
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes:33554432
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/flags:4
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/used_bytes:65864957952
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/total_bytes:83751862272
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_total:167503724544
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_may_use:739508224
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_readonly:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_used:65864957952
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_reserved:1835008
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_used:131729915904
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes_pinned:1884160
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes:83751862272
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_size:536870912
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/flags:1
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/used_bytes:23029876707328
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/total_bytes:23175643529216
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_total:46351287058432
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_may_use:36474880
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_readonly:1703936
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_used:23029876707328
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_reserved:15003648
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_used:46059753414656
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes:23175643529216
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_reserved:536870912
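The raw byte counters above are easier to read in GiB; a quick sketch (numbers copied verbatim from the dump) shows that in this healthy state, metadata bytes_may_use, btrfs's in-flight metadata reservation, is comfortably small compared with the allocated metadata space:

```python
# Convert the metadata counters from the sysfs dump above into GiB.
GIB = 2 ** 30
meta_total   = 83_751_862_272   # allocation/metadata/total_bytes
meta_used    = 65_864_957_952   # allocation/metadata/bytes_used
meta_may_use = 739_508_224      # allocation/metadata/bytes_may_use

print(f"metadata total  : {meta_total / GIB:7.2f} GiB")    # 78.00 GiB
print(f"metadata used   : {meta_used / GIB:7.2f} GiB")     # 61.34 GiB
print(f"metadata may_use: {meta_may_use / GIB:7.2f} GiB")  # 0.69 GiB
```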
On Thu, Apr 27, 2017 at 6:35 PM, Chris Murphy <lists@colorremedies.com> wrote:
> On Thu, Apr 27, 2017 at 10:46 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
>> After a reboot, I found this in the logs:
>> [ 322.510152] BTRFS info (device sdm): The free space cache file
>> (36114966511616) is invalid. skip it
>> [ 488.702570] btrfs_printk: 847 callbacks suppressed
>>
>>
>>
>> On Thu, Apr 27, 2017 at 10:18 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>> no snapshots and no qgroups, just a straight up large volume.
>>>
>>> shrapnel gerard-store # btrfs fi df /home/exports
>>> Data, RAID1: total=20.93TiB, used=20.86TiB
>>> System, RAID1: total=32.00MiB, used=3.73MiB
>>> Metadata, RAID1: total=79.00GiB, used=61.10GiB
>>> GlobalReserve, single: total=512.00MiB, used=544.00KiB
>>>
>>> shrapnel gerard-store # btrfs filesystem usage /home/exports
>>> Overall:
>>> Device size: 69.13TiB
>>> Device allocated: 42.01TiB
>>> Device unallocated: 27.13TiB
>>> Device missing: 0.00B
>>> Used: 41.84TiB
>>> Free (estimated): 13.63TiB (min: 13.63TiB)
>>> Data ratio: 2.00
>>> Metadata ratio: 2.00
>>> Global reserve: 512.00MiB (used: 1.52MiB)
>>>
>>> On Thu, Apr 27, 2017 at 9:07 AM, Roman Mamedov <rm@romanrm.net> wrote:
>>>> On Thu, 27 Apr 2017 08:52:30 -0500
>>>> Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>>>
>>>>> I could just reboot the system and be fine for a week or so, but is
>>>>> there any way to diagnose this?
>>>>
>>>> `btrfs fi df` for a start.
>>>>
>>>> Also obligatory questions: do you have a lot of snapshots, and do you use
>>>> qgroups?
>>>>
>
> A dev might find this helpful
> $ grep -IR . /sys/fs/btrfs/usevolumeUUIDhere/allocation/
>
>
> Also note that a lot of people on the Btrfs list aren't getting Gerard's
> emails: Gmail and some other mail agents classify them as spam because
> of a DMARC failure. Basically, rarcoa.com publishes a policy telling
> receivers to reject mail that claims to be from rarcoa.com but was not
> actually sent from rarcoa.com's own servers. This is supposed to be
> fixed on the mailing list servers: they need to strip these headers and
> insert their own, rather than leaving them intact only for the mail to
> be rejected later by receivers honoring the stated policy.
>
> --
> Chris Murphy
* Re: No space left on device when doing "mkdir"
From: Gerard Saraber @ 2017-05-01 19:12 UTC (permalink / raw)
To: Chris Murphy; +Cc: Roman Mamedov, Btrfs BTRFS
It did it again:
shrapnel share # touch test.txt
touch: cannot touch 'test.txt': No space left on device
shrapnel share # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 35G 19G 15G 56% /
devtmpfs 10M 0 10M 0% /dev
tmpfs 3.2G 1.2M 3.2G 1% /run
shm 16G 0 16G 0% /dev/shm
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
/dev/sdb 35T 22T 14T 62% /home/exports
shrapnel share # grep -IR . /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/flags:2
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/used_bytes:3997696
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/total_bytes:33554432
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_total:67108864
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_may_use:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_readonly:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_used:3997696
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_reserved:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_used:7995392
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes_pinned:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes:33554432
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/flags:4
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/used_bytes:66595684352
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/total_bytes:280246616064
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_pinned:835584
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_total:560493232128
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_may_use:2014478974976
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_readonly:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_used:66595684352
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_reserved:16384
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_used:133191368704
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes_pinned:1048576
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes:280246616064
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_size:536870912
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/flags:1
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/used_bytes:23249396273152
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/total_bytes:23320598675456
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_pinned:1835008
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_total:46641197350912
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_may_use:262144
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_readonly:1769472
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_used:23249396273152
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_reserved:0
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_used:46498792546304
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes_pinned:2097152
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes:23320598675456
/sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_reserved:536018944
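Converting the metadata counters from this dump (numbers copied verbatim from above) makes the anomaly jump out: bytes_may_use, the pending metadata reservations, has ballooned to roughly 1.8 TiB against only ~261 GiB of allocated metadata space, which would plausibly drive every metadata-reserving operation (mkdir, touch) into ENOSPC even with terabytes of data space free:

```python
# Compare metadata reservations against allocated metadata space,
# using the counters from the failing sysfs dump above.
TIB = 2 ** 40
meta_total   = 280_246_616_064      # allocation/metadata/total_bytes   (~261 GiB)
meta_used    = 66_595_684_352       # allocation/metadata/bytes_used    (~62 GiB)
meta_may_use = 2_014_478_974_976    # allocation/metadata/bytes_may_use (~1.83 TiB)

# Reservations exceed the entire allocated metadata space by a wide margin.
print(f"may_use = {meta_may_use / TIB:.2f} TiB, "
      f"{meta_may_use / meta_total:.1f}x the allocated metadata space")
```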
On Fri, Apr 28, 2017 at 8:56 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
> Dmarc is off, here's the output of the allocations: it's working
> correctly right now, I'll update when it does it again.
>
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/flags:2
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/used_bytes:3948544
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/raid1/total_bytes:33554432
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_pinned:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_total:67108864
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_may_use:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_readonly:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_used:3948544
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/bytes_reserved:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/disk_used:7897088
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes_pinned:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/system/total_bytes:33554432
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/flags:4
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/used_bytes:65864957952
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/raid1/total_bytes:83751862272
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_pinned:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_total:167503724544
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_may_use:739508224
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_readonly:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_used:65864957952
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/bytes_reserved:1835008
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/disk_used:131729915904
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes_pinned:1884160
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/metadata/total_bytes:83751862272
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_size:536870912
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/flags:1
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/used_bytes:23029876707328
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/raid1/total_bytes:23175643529216
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_pinned:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_total:46351287058432
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_may_use:36474880
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_readonly:1703936
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_used:23029876707328
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/bytes_reserved:15003648
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/disk_used:46059753414656
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes_pinned:0
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/data/total_bytes:23175643529216
> /sys/fs/btrfs/7af2e65c-3935-4e0d-aa63-9ef6be991cb9/allocation/global_rsv_reserved:536870912
>
>
> On Thu, Apr 27, 2017 at 6:35 PM, Chris Murphy <lists@colorremedies.com> wrote:
>> On Thu, Apr 27, 2017 at 10:46 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>> After a reboot, I found this in the logs:
>>> [ 322.510152] BTRFS info (device sdm): The free space cache file
>>> (36114966511616) is invalid. skip it
>>> [ 488.702570] btrfs_printk: 847 callbacks suppressed
>>>
>>>
>>>
>>> On Thu, Apr 27, 2017 at 10:18 AM, Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>>> no snapshots and no qgroups, just a straight up large volume.
>>>>
>>>> shrapnel gerard-store # btrfs fi df /home/exports
>>>> Data, RAID1: total=20.93TiB, used=20.86TiB
>>>> System, RAID1: total=32.00MiB, used=3.73MiB
>>>> Metadata, RAID1: total=79.00GiB, used=61.10GiB
>>>> GlobalReserve, single: total=512.00MiB, used=544.00KiB
>>>>
>>>> shrapnel gerard-store # btrfs filesystem usage /home/exports
>>>> Overall:
>>>> Device size: 69.13TiB
>>>> Device allocated: 42.01TiB
>>>> Device unallocated: 27.13TiB
>>>> Device missing: 0.00B
>>>> Used: 41.84TiB
>>>> Free (estimated): 13.63TiB (min: 13.63TiB)
>>>> Data ratio: 2.00
>>>> Metadata ratio: 2.00
>>>> Global reserve: 512.00MiB (used: 1.52MiB)
>>>>
>>>> On Thu, Apr 27, 2017 at 9:07 AM, Roman Mamedov <rm@romanrm.net> wrote:
>>>>> On Thu, 27 Apr 2017 08:52:30 -0500
>>>>> Gerard Saraber <gsaraber@rarcoa.com> wrote:
>>>>>
>>>>>> I could just reboot the system and be fine for a week or so, but is
>>>>>> there any way to diagnose this?
>>>>>
>>>>> `btrfs fi df` for a start.
>>>>>
>>>>> Also obligatory questions: do you have a lot of snapshots, and do you use
>>>>> qgroups?
>>>>>
>>
>> A dev might find this helpful
>> $ grep -IR . /sys/fs/btrfs/usevolumeUUIDhere/allocation/
>>
>>
>> Also note that a lot of people on the Btrfs list aren't getting Gerard's
>> emails: Gmail and some other mail agents classify them as spam because
>> of a DMARC failure. Basically, rarcoa.com publishes a policy telling
>> receivers to reject mail that claims to be from rarcoa.com but was not
>> actually sent from rarcoa.com's own servers. This is supposed to be
>> fixed on the mailing list servers: they need to strip these headers and
>> insert their own, rather than leaving them intact only for the mail to
>> be rejected later by receivers honoring the stated policy.
>>
>> --
>> Chris Murphy
End of thread. Thread overview, 7 messages:
2017-04-27 13:52 No space left on device when doing "mkdir" Gerard Saraber
2017-04-27 14:07 ` Roman Mamedov
2017-04-27 15:18 ` Gerard Saraber
2017-04-27 16:46 ` Gerard Saraber
2017-04-27 23:35 ` Chris Murphy
2017-04-28 13:56 ` Gerard Saraber
2017-05-01 19:12 ` Gerard Saraber