* space still allocated post deletion
@ 2022-04-19 10:41 Alex Powell
  2022-04-19 11:21 ` Graham Cobb
  0 siblings, 1 reply; 6+ messages in thread
From: Alex Powell @ 2022-04-19 10:41 UTC (permalink / raw)
  To: linux-btrfs

Hi team,
I have deleted hundreds of GB of files; however, the space used still
remains the same, even after a full balance and a dusage=0 balance.
The location I am deleting from is usually a mount point, but I found
some files had been saved there while the array was unmounted, which I
then removed.

root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
6.4G /mnt/data/triage/complete
189G /mnt/data/triage/incomplete
195G /mnt/data/triage

root@bean:/home/bean# rm -rf /mnt/data/triage/complete/*
root@bean:/home/bean# rm -rf /mnt/data/triage/incomplete/*
root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
0 /mnt/data/triage/complete
0 /mnt/data/triage/incomplete
0 /mnt/data/triage

root@bean:/home/bean# btrfs filesystem show
Label: none  uuid: 24933208-0a7a-42ff-90d8-f0fc2028dec9
Total devices 1 FS bytes used 209.03GiB
devid    1 size 223.07GiB used 211.03GiB path /dev/sdh2

root@bean:/home/bean# du -h --max-depth=1 /
244M /boot
91M /home
7.5M /etc
0 /media
0 /dev
0 /mnt
0 /opt
0 /proc
2.7G /root
1.6M /run
0 /srv
0 /sys
0 /tmp
3.6G /usr
13G /var
710M /snap
22G /

Linux bean 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC
2022 x86_64 x86_64 x86_64 GNU/Linux
btrfs-progs v5.16.2
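[A quick sanity check on the numbers reported above: comparing what
`du` can see against what btrfs says is allocated shows roughly how
much space is hidden from the mounted paths. A rough sketch, with the
values transcribed from the output above:]

```shell
# Rough arithmetic from the report above: `du` on / sees ~22 GiB,
# while `btrfs filesystem show` reports ~209 GiB used on /dev/sdh2.
du_visible_gib=22     # du -h --max-depth=1 /   -> 22G total
fs_used_gib=209       # btrfs filesystem show  -> 209.03GiB used
hidden_gib=$((fs_used_gib - du_visible_gib))
echo "space not visible to du: ~${hidden_gib} GiB"
```

[That unaccounted space is what the replies attribute to unmounted
subvolumes and snapshots.]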

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: space still allocated post deletion
  2022-04-19 10:41 space still allocated post deletion Alex Powell
@ 2022-04-19 11:21 ` Graham Cobb
  2022-04-19 12:42   ` Hugo Mills
  2022-04-20  7:17   ` Alex Powell
  0 siblings, 2 replies; 6+ messages in thread
From: Graham Cobb @ 2022-04-19 11:21 UTC (permalink / raw)
  To: Alex Powell, linux-btrfs

On 19/04/2022 11:41, Alex Powell wrote:
> Hi team,
> I have deleted hundreds of GB of files; however, the space used still
> remains the same, even after a full balance and a dusage=0 balance.
> The location I am deleting from is usually a mount point, but I found
> some files had been saved there while the array was unmounted, which I
> then removed.

Most likely you have files in subvolumes which are not currently mounted
anywhere. You need to mount the root subvolume of the filesystem to see
all the files. Many distros default to putting the system root into a
non-root subvolume.

I think you can see them all if you use:

btrfs subv list -a /

To access them...

mkdir /mnt/1
mount -o subvolid=5 /dev/sdh2 /mnt/1
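[For illustration, the listing from `btrfs subv list -a` looks
something like the sample below — the IDs and paths here are invented
for the sketch. The last field of each line is the subvolume path,
which shows what exists above the normally mounted root:]

```shell
# Invented sample of `btrfs subv list -a /` output -- real IDs and
# paths will differ; the point is the path column at the end.
sample='ID 256 gen 1000 top level 5 path <FS_TREE>/@
ID 257 gen 1000 top level 5 path <FS_TREE>/@home
ID 310 gen 1200 top level 5 path <FS_TREE>/.snapshots/snap-2022-04-18'
# Pull out just the path column to see everything under the top level:
printf '%s\n' "$sample" | awk '{print $NF}'
```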

Graham

> 
> root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> 6.4G /mnt/data/triage/complete
> 189G /mnt/data/triage/incomplete
> 195G /mnt/data/triage
> 
> root@bean:/home/bean# rm -rf /mnt/data/triage/complete/*
> root@bean:/home/bean# rm -rf /mnt/data/triage/incomplete/*
> root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> 0 /mnt/data/triage/complete
> 0 /mnt/data/triage/incomplete
> 0 /mnt/data/triage
> 
> root@bean:/home/bean# btrfs filesystem show
> Label: none  uuid: 24933208-0a7a-42ff-90d8-f0fc2028dec9
> Total devices 1 FS bytes used 209.03GiB
> devid    1 size 223.07GiB used 211.03GiB path /dev/sdh2
> 
> root@bean:/home/bean# du -h --max-depth=1 /
> 244M /boot
> 91M /home
> 7.5M /etc
> 0 /media
> 0 /dev
> 0 /mnt
> 0 /opt
> 0 /proc
> 2.7G /root
> 1.6M /run
> 0 /srv
> 0 /sys
> 0 /tmp
> 3.6G /usr
> 13G /var
> 710M /snap
> 22G /
> 
> Linux bean 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC
> 2022 x86_64 x86_64 x86_64 GNU/Linux
> btrfs-progs v5.16.2



* Re: space still allocated post deletion
  2022-04-19 11:21 ` Graham Cobb
@ 2022-04-19 12:42   ` Hugo Mills
  2022-04-20  7:17   ` Alex Powell
  1 sibling, 0 replies; 6+ messages in thread
From: Hugo Mills @ 2022-04-19 12:42 UTC (permalink / raw)
  To: Graham Cobb; +Cc: Alex Powell, linux-btrfs

On Tue, Apr 19, 2022 at 12:21:31PM +0100, Graham Cobb wrote:
> On 19/04/2022 11:41, Alex Powell wrote:
> > Hi team,
> > I have deleted hundreds of GB of files; however, the space used still
> > remains the same, even after a full balance and a dusage=0 balance.
> > The location I am deleting from is usually a mount point, but I found
> > some files had been saved there while the array was unmounted, which I
> > then removed.
> 
> Most likely you have files in subvolumes which are not currently mounted
> anywhere. You need to mount the root subvolume of the filesystem to see
> all the files. Many distros default to putting the system root into a
> non-root subvolume.
> 
> I think you can see them all if you use:
> 
> btrfs subv list -a /
> 
> To access them...
> 
> mkdir /mnt/1
> mount -o subvolid=5 /dev/sdh2 /mnt/1

   As well as the above, the deleted files may also exist in
snapshots, in which case the space is still needed to store the
snapshot copy of it, and won't be released until all of those
snapshots have been removed as well (this may simply be a case of
waiting for the snapshots to time-out through your normal cleanup
schedule).

   If the things you deleted were themselves subvolumes, then it's
possible that there was an open file on the subvol (or that it was
mounted elsewhere), in which case it won't be cleaned up until the open
file/mount is closed. You can check that with "btrfs sub list -d", but
I don't think that's likely to be the case here.
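[For instance, `btrfs sub list -d` prints one line per
deleted-but-not-yet-cleaned subvolume, with a DELETED marker in the
path field; an empty result means nothing is pending cleanup. A sketch
with an invented sample line:]

```shell
# Invented sample of `btrfs sub list -d <path>` output; subvolumes
# that are deleted but still pinned show a DELETED path marker.
pending='ID 270 gen 2100 top level 5 path DELETED'
count=$(printf '%s\n' "$pending" | grep -c 'DELETED')
echo "subvolumes awaiting cleanup: $count"
```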

   Hugo.

> Graham
> 
> > 
> > root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> > 6.4G /mnt/data/triage/complete
> > 189G /mnt/data/triage/incomplete
> > 195G /mnt/data/triage
> > 
> > root@bean:/home/bean# rm -rf /mnt/data/triage/complete/*
> > root@bean:/home/bean# rm -rf /mnt/data/triage/incomplete/*
> > root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> > 0 /mnt/data/triage/complete
> > 0 /mnt/data/triage/incomplete
> > 0 /mnt/data/triage
> > 
> > root@bean:/home/bean# btrfs filesystem show
> > Label: none  uuid: 24933208-0a7a-42ff-90d8-f0fc2028dec9
> > Total devices 1 FS bytes used 209.03GiB
> > devid    1 size 223.07GiB used 211.03GiB path /dev/sdh2
> > 
> > root@bean:/home/bean# du -h --max-depth=1 /
> > 244M /boot
> > 91M /home
> > 7.5M /etc
> > 0 /media
> > 0 /dev
> > 0 /mnt
> > 0 /opt
> > 0 /proc
> > 2.7G /root
> > 1.6M /run
> > 0 /srv
> > 0 /sys
> > 0 /tmp
> > 3.6G /usr
> > 13G /var
> > 710M /snap
> > 22G /
> > 
> > Linux bean 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC
> > 2022 x86_64 x86_64 x86_64 GNU/Linux
> > btrfs-progs v5.16.2
> 

-- 
Hugo Mills             | What's a Nazgûl like you doing in a place like this?
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                                                Illiad


* Re: space still allocated post deletion
  2022-04-19 11:21 ` Graham Cobb
  2022-04-19 12:42   ` Hugo Mills
@ 2022-04-20  7:17   ` Alex Powell
  2022-04-20  8:27     ` Graham Cobb
  2022-04-23  6:04     ` Andrei Borzenkov
  1 sibling, 2 replies; 6+ messages in thread
From: Alex Powell @ 2022-04-20  7:17 UTC (permalink / raw)
  To: Graham Cobb; +Cc: linux-btrfs

Thank you,
The issue was Veeam keeping snapshots in .veeam-snapshots above the
root subvolume. I have no idea why this is the expected behaviour,
because the backups are set to go to /mnt/data.

I'm confident that if I wait 7 days (the backup retention period),
this will fix itself. So this is not a bug in btrfs; it is more a
misconfiguration by Ubuntu or Veeam.

On Tue, Apr 19, 2022 at 8:51 PM Graham Cobb <g.btrfs@cobb.me.uk> wrote:
>
> On 19/04/2022 11:41, Alex Powell wrote:
> > Hi team,
> > I have deleted hundreds of GB of files; however, the space used still
> > remains the same, even after a full balance and a dusage=0 balance.
> > The location I am deleting from is usually a mount point, but I found
> > some files had been saved there while the array was unmounted, which I
> > then removed.
>
> Most likely you have files in subvolumes which are not currently mounted
> anywhere. You need to mount the root subvolume of the filesystem to see
> all the files. Many distros default to putting the system root into a
> non-root subvolume.
>
> I think you can see them all if you use:
>
> btrfs subv list -a /
>
> To access them...
>
> mkdir /mnt/1
> mount -o subvolid=5 /dev/sdh2 /mnt/1
>
> Graham
>
> >
> > root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> > 6.4G /mnt/data/triage/complete
> > 189G /mnt/data/triage/incomplete
> > 195G /mnt/data/triage
> >
> > root@bean:/home/bean# rm -rf /mnt/data/triage/complete/*
> > root@bean:/home/bean# rm -rf /mnt/data/triage/incomplete/*
> > root@bean:/home/bean# du -h --max-depth=1 /mnt/data/triage
> > 0 /mnt/data/triage/complete
> > 0 /mnt/data/triage/incomplete
> > 0 /mnt/data/triage
> >
> > root@bean:/home/bean# btrfs filesystem show
> > Label: none  uuid: 24933208-0a7a-42ff-90d8-f0fc2028dec9
> > Total devices 1 FS bytes used 209.03GiB
> > devid    1 size 223.07GiB used 211.03GiB path /dev/sdh2
> >
> > root@bean:/home/bean# du -h --max-depth=1 /
> > 244M /boot
> > 91M /home
> > 7.5M /etc
> > 0 /media
> > 0 /dev
> > 0 /mnt
> > 0 /opt
> > 0 /proc
> > 2.7G /root
> > 1.6M /run
> > 0 /srv
> > 0 /sys
> > 0 /tmp
> > 3.6G /usr
> > 13G /var
> > 710M /snap
> > 22G /
> >
> > Linux bean 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC
> > 2022 x86_64 x86_64 x86_64 GNU/Linux
> > btrfs-progs v5.16.2
>


* Re: space still allocated post deletion
  2022-04-20  7:17   ` Alex Powell
@ 2022-04-20  8:27     ` Graham Cobb
  2022-04-23  6:04     ` Andrei Borzenkov
  1 sibling, 0 replies; 6+ messages in thread
From: Graham Cobb @ 2022-04-20  8:27 UTC (permalink / raw)
  To: Alex Powell; +Cc: linux-btrfs

On 20/04/2022 08:17, Alex Powell wrote:
> Thank you,
> The issue was Veeam keeping snapshots in .veeam-snapshots above the
> root subvolume. I have no idea why this is the expected behaviour,
> because the backups are set to go to /mnt/data.

I don't use either Veeam or Ubuntu, but I note that btrbk (which I
use) has a similar concept of storing some snapshots on the disk
itself and only moving a subset of them to the backup disk. I use this
to keep hourly snapshots on the source disk itself and move one of
them to the backup disk per day. The hourly snapshots have proved
occasionally useful when I accidentally delete a chunk of a document I
am working on and then save the file - DOH!

Of course, it isn't so useful if you never knew it existed :-)
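[For the record, the btrbk behaviour described above is driven by
retention options in btrbk.conf. The fragment below is a rough,
from-memory sketch of such a setup — the paths are invented, and the
btrbk documentation should be consulted for exact option names and
retention syntax:]

```
# Keep a day's worth of hourly snapshots on the source disk itself,
# and a longer history on the backup disk (hypothetical paths).
volume /mnt/btr_pool
  snapshot_dir           btrbk_snapshots
  snapshot_preserve_min  24h
  snapshot_preserve      24h
  subvolume home
    target send-receive /mnt/backup/btrbk
    target_preserve_min  no
    target_preserve      7d 4w
```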



* Re: space still allocated post deletion
  2022-04-20  7:17   ` Alex Powell
  2022-04-20  8:27     ` Graham Cobb
@ 2022-04-23  6:04     ` Andrei Borzenkov
  1 sibling, 0 replies; 6+ messages in thread
From: Andrei Borzenkov @ 2022-04-23  6:04 UTC (permalink / raw)
  To: Alex Powell, Graham Cobb; +Cc: linux-btrfs

On 20.04.2022 10:17, Alex Powell wrote:
> Thank you,
> The issue was Veeam keeping snapshots in .veeam-snapshots above the
> root subvolume. I have no idea why this is the expected behaviour,
> because the backups are set to go to /mnt/data.
> 

Snapshots are the source of backup data. Veeam always creates a
snapshot to get a stable state, either using its own driver or (on
btrfs) using btrfs native snapshots. Then Veeam reads data from those
snapshots and stores it in the location you have specified. See the
Veeam Agent for Linux documentation (I assume this is what you are
using).

> I'm confident that if I wait 7 days (the backup retention period),
> this will fix itself. So this is not a bug in btrfs; it is more a
> misconfiguration by Ubuntu or Veeam.
> 

I fail to see any misconfiguration. If you are implying that btrfs
snapshots should be located on an external disk - that is simply not
possible.

