* Recursive delete file from all subvolumes (snapshots)
@ 2016-01-15 8:33 Wolfgang Mader
2016-01-15 9:05 ` Roman Mamedov
0 siblings, 1 reply; 6+ messages in thread
From: Wolfgang Mader @ 2016-01-15 8:33 UTC (permalink / raw)
To: Btrfs BTRFS
Dear all,
I have a btrfs raid 10 from which I take hourly snapshots using snapper. Now
I wonder if there is a way to delete a file together with all its occurrences
in all snapshots.
My use case is that the file I want to delete is large, and I want to free its
space on disk. Thus, I have to get rid of its "live" version but also of all
references to it in snapshots.
Thanks,
Wolfgang
* Re: Recursive delete file from all subvolumes (snapshots)
2016-01-15 8:33 Recursive delete file from all subvolumes (snapshots) Wolfgang Mader
@ 2016-01-15 9:05 ` Roman Mamedov
2016-01-15 9:11 ` Wolfgang Mader
2016-01-15 11:48 ` Duncan
0 siblings, 2 replies; 6+ messages in thread
From: Roman Mamedov @ 2016-01-15 9:05 UTC (permalink / raw)
To: Wolfgang Mader; +Cc: Btrfs BTRFS
On Fri, 15 Jan 2016 09:33:14 +0100
Wolfgang Mader <Wolfgang_Mader@brain-frog.de> wrote:
> I have a btrfs raid 10 from which I take hourly snapshots using snapper. Now
> I wonder if there is a way to delete a file together with all its occurrences
> in all snapshots.
>
> My use case is that the file I want to delete is large, and I want to free its
> space on disk. Thus, I have to get rid of its "live" version but also of all
> references to it in snapshots.
E.g. if your file is at /path/to/file.dat, and your snapshot structure is
/snapshots/YYYY-MM-DD@time/, you would simply do:
rm /snapshots/*/path/to/file.dat
In fact this is what I often do with my timed snapshots when deleting some
files and wanting to recover free space immediately, not waiting for all their
snapshots to expire and get deleted by the usual time-based deletion rules.
If your snapshots are read-only it becomes more complex, but still doable.
--
With respect,
Roman
* Re: Recursive delete file from all subvolumes (snapshots)
2016-01-15 9:05 ` Roman Mamedov
@ 2016-01-15 9:11 ` Wolfgang Mader
2016-01-15 11:48 ` Duncan
1 sibling, 0 replies; 6+ messages in thread
From: Wolfgang Mader @ 2016-01-15 9:11 UTC (permalink / raw)
To: Btrfs BTRFS
On Friday, January 15, 2016 2:05:39 PM CET Roman Mamedov wrote:
> On Fri, 15 Jan 2016 09:33:14 +0100
>
> Wolfgang Mader <Wolfgang_Mader@brain-frog.de> wrote:
> > I have a btrfs raid 10 from which I take hourly snapshots using snapper.
> > Now I wonder if there is a way to delete a file together with all its
> > occurrences in all snapshots.
> >
> > My use case is that the file I want to delete is large, and I want to
> > free its space on disk. Thus, I have to get rid of its "live" version but
> > also of all references to it in snapshots.
>
> E.g. if your file is at /path/to/file.dat, and your snapshot structure is
> /snapshots/YYYY-MM-DD@time/, you would simply do:
>
> rm /snapshots/*/path/to/file.dat
>
> In fact this is what I often do with my timed snapshots when deleting some
> files and wanting to recover free space immediately, not waiting for all
> their snapshots to expire and get deleted by the usual time-based deletion
> rules.
>
> If your snapshots are read-only it becomes more complex, but still doable.
Neat idea.
Thx.
* Re: Recursive delete file from all subvolumes (snapshots)
2016-01-15 9:05 ` Roman Mamedov
2016-01-15 9:11 ` Wolfgang Mader
@ 2016-01-15 11:48 ` Duncan
2016-01-15 13:33 ` Wolfgang Mader
1 sibling, 1 reply; 6+ messages in thread
From: Duncan @ 2016-01-15 11:48 UTC (permalink / raw)
To: linux-btrfs
Roman Mamedov posted on Fri, 15 Jan 2016 14:05:39 +0500 as excerpted:
> On Fri, 15 Jan 2016 09:33:14 +0100 Wolfgang Mader
> <Wolfgang_Mader@brain-frog.de> wrote:
>
>> I have a btrfs raid 10 from which I take hourly snapshots using
>> snapper.
Hopefully you have snapper set up with a good thinning program as well, as
ideally, you want to keep your snapshots to 250-ish per subvolume and a
thousand or two per filesystem, maximum, for scaling reasons, and hourly
snapshots will eat up that 250 target in 10 days, 10 hours...
But it's quite possible to start with hourly snapshots, thinning to say 6-
hourly (four per day, deleting 5/6) after a couple days, and continuing
to thin down to say weekly after several weeks, then keeping weekly
snapshots for a year before deleting them to recover the space and
resorting to proper backups if anything older needs to be recovered, and stay
under 200 snapshots per subvolume.
That'll help keep your btrfs healthy and your btrfs maintenance commands
(scrub, balance, check, etc), if needed, running in something like
reasonable time. =:^)
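Snapper's timeline cleanup can approximate such a thinning scheme, though its
steps are hourly/daily/weekly/monthly rather than 6-hourly. A sketch of the
relevant settings in /etc/snapper/configs/<config>; the retention counts here
are illustrative, not prescribed numbers:

```shell
# Illustrative snapper timeline-cleanup settings. The counts below are
# examples only (48 + 14 + 52 = 114 snapshots per subvolume, well under
# the ~200-250 target discussed above).
TIMELINE_CREATE="yes"         # take timed snapshots
TIMELINE_CLEANUP="yes"        # enable the timeline cleanup algorithm
TIMELINE_LIMIT_HOURLY="48"    # keep two days of hourly snapshots
TIMELINE_LIMIT_DAILY="14"     # then two weeks of daily ones
TIMELINE_LIMIT_WEEKLY="52"    # then weekly snapshots for a year
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
```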
>> Now, I wonder if there is a way to delete a file together
>> with all its occurrences in all snapshots.
If the file was there at the time of the snapshot, it's part of the
snapshot and to remove it when deleting a file from the working set
rather defeats the purpose, tho of course with writable snapshots, you
can delete it from the snapshot manually (see RM's method below), and as
RM says, with read-only snapshots it's more complex but still possible
(see my reply under that bit, further below).
>> My use case is that the file I want to delete is large, and I want to
>> free its space on disk. Thus, I have to get rid of its "live" version
>> but also of all references to it in snapshots.
>
> E.g. if your file is at /path/to/file.dat, and your snapshot structure
> is /snapshots/YYYY-MM-DD@time/, you would simply do:
>
> rm /snapshots/*/path/to/file.dat
>
> In fact this is what I often do with my timed snapshots when deleting
> some files and wanting to recover free space immediately, not waiting
> for all their snapshots to expire and get deleted by the usual
> time-based deletion rules.
Of course that implies that the root containing all the snapshots is
itself mounted or otherwise nested in the mounted tree, somewhere. The
recommendation is to keep it unmounted and not directly accessible, by
default, only mounting the snapshots root when you are directly working
with the snapshots.
Among other reasons, there's a security issue if your snapshots contain
old executables including set-UID/GID executables, and they're security
updated. If the old vulnerable versions remain accessible to ordinary
users, as they will if the snapshots remain routinely accessible within
the tree, then it's possible for a user to use them to gain the privs of
the user/group (typically root) they run as. If these snapshots aren't
normally accessible in the tree, then users won't have access to them and
won't be able to use the old and vulnerable versions the snapshots
contain to privilege-escalate.
Plus of course, if they're unmounted, they're less likely to be
accidentally damaged or deleted.
> If your snapshots are read-only it becomes more complex, but still
> doable.
Read-only is a snapshot property. As such, it can be toggled via btrfs
property set calls. So the idea here would be a script that loops thru
the snapshots, doing a btrfs property set writable, rm file, btrfs
property set read-only, for each snapshot. Trivial enough shell script. So
a bit more complex than if the snapshots are writable in the first place,
but far from unmanageably so. =:^)
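Such a loop might be sketched as below; the snapshot root and the file path
are assumptions to adapt to your own layout:

```shell
# Delete one file from every snapshot under a given directory, toggling
# each snapshot's read-only property around the rm.
# $1 = directory containing the snapshots (one subvolume per entry),
# $2 = file path relative to each snapshot's root.
purge_from_snapshots() {
    snapdir=$1; target=$2
    for snap in "$snapdir"/*/; do
        btrfs property set -ts "$snap" ro false   # make snapshot writable
        rm -f "$snap$target"                      # drop this reference
        btrfs property set -ts "$snap" ro true    # restore read-only
    done
}

# e.g.: purge_from_snapshots /snapshots path/to/file.dat
```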
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: Recursive delete file from all subvolumes (snapshots)
2016-01-15 11:48 ` Duncan
@ 2016-01-15 13:33 ` Wolfgang Mader
2016-01-15 15:25 ` Duncan
0 siblings, 1 reply; 6+ messages in thread
From: Wolfgang Mader @ 2016-01-15 13:33 UTC (permalink / raw)
To: Duncan; +Cc: linux-btrfs
On Friday, January 15, 2016 11:48:11 AM CET Duncan wrote:
> Roman Mamedov posted on Fri, 15 Jan 2016 14:05:39 +0500 as excerpted:
> > On Fri, 15 Jan 2016 09:33:14 +0100 Wolfgang Mader
> >
> > <Wolfgang_Mader@brain-frog.de> wrote:
> >> I have a btrfs raid 10 from which I take hourly snapshots using
> >> snapper.
>
> Hopefully you have snapper set up with a good thinning program as well, as
> ideally, you want to keep your snapshots to 250-ish per subvolume and a
> thousand or two per filesystem, maximum, for scaling reasons, and hourly
> snapshots will eat up that 250 target in 10 days, 10 hours...
>
> But it's quite possible to start with hourly snapshots, thinning to say 6-
> hourly (four per day, deleting 5/6) after a couple days, and continuing
> to thin down to say weekly after several weeks, then keeping weekly
> snapshots for a year before deleting them to recover the space and
> resorting to proper backups if anything older needs to be recovered, and stay
> under 200 snapshots per subvolume.
>
> That'll help keep your btrfs healthy and your btrfs maintenance commands
> (scrub, balance, check, etc), if needed, running in something like
> reasonable time. =:^)
Thanks for raising this point. Right now, I have some of my subvolumes
snapshotted w/o thinning, but I am aware of the issues arising from too many
subvolumes. I have to fix this in a timely manner...
>
> >> Now, I wonder if there is a way to delete a file together
> >> with all its occurrences in all snapshots.
>
> If the file was there at the time of the snapshot, it's part of the
> snapshot and to remove it when deleting a file from the working set
> rather defeats the purpose, tho of course with writable snapshots, you
> can delete it from the snapshot manually (see RM's method below), and as
> RM says, with read-only snapshots it's more complex but still possible
> (see my reply under that bit, further below).
>
> >> My use case is that the file I want to delete is large, and I want to
> >> free its space on disk. Thus, I have to get rid of its "live" version
> >> but also of all references to it in snapshots.
> >
> > E.g. if your file is at /path/to/file.dat, and your snapshot structure
> > is /snapshots/YYYY-MM-DD@time/, you would simply do:
> >
> > rm /snapshots/*/path/to/file.dat
> >
> > In fact this is what I often do with my timed snapshots when deleting
> > some files and wanting to recover free space immediately, not waiting
> > for all their snapshots to expire and get deleted by the usual
> > time-based deletion rules.
>
> Of course that implies that the root containing all the snapshots is
> itself mounted or otherwise nested in the mounted tree, somewhere. The
> recommendation is to keep it unmounted and not directly accessible, by
> default, only mounting the snapshots root when you are directly working
> with the snapshots.
As far as I know, snapper puts its snapshots under .snapshots in the root of
the snapshotted subvolume. As I want to work with the subvolume, it is mounted,
and with it its snapshots. So, according to your answer, I should figure out
how to change the location at which snapper places its snapshots. Snapper
creates the subvolumes as read-only, which gives some protection against
unwanted changes.
>
> Among other reasons, there's a security issue if your snapshots contain
> old executables including set-UID/GID executables, and they're security
> updated. If the old vulnerable versions remain accessible to ordinary
> users, as they will if the snapshots remain routinely accessible within
> the tree, then it's possible for a user to use them to gain the privs of
> the user/group (typically root) they run as. If these snapshots aren't
> normally accessible in the tree, then users won't have access to them and
> won't be able to use the old and vulnerable versions the snapshots
> contain to privilege-escalate.
While I can see the issue, I only have music and image files, so this should
be no problem for my setup. But it is good to keep this in mind.
>
> Plus of course, if they're unmounted, they're less likely to be
> accidentally damaged or deleted.
>
> > If your snapshots are read-only it becomes more complex, but still
> > doable.
>
> Read-only is a snapshot property. As such, it can be toggled via btrfs
> property set calls. So the idea here would be a script that loops thru
> the snapshots, doing a btrfs property set writable, rm file, btrfs
> property set read-only, for each snapshot. Trivial enough shell script. So
> a bit more complex than if the snapshots are writable in the first place,
> but far from unmanageably so. =:^)
* Re: Recursive delete file from all subvolumes (snapshots)
2016-01-15 13:33 ` Wolfgang Mader
@ 2016-01-15 15:25 ` Duncan
0 siblings, 0 replies; 6+ messages in thread
From: Duncan @ 2016-01-15 15:25 UTC (permalink / raw)
To: linux-btrfs
Wolfgang Mader posted on Fri, 15 Jan 2016 14:33:27 +0100 as excerpted:
>> Of course that implies that the root containing all the snapshots is
>> itself mounted or otherwise nested in the mounted tree, somewhere. The
>> recommendation is to keep it unmounted and not directly accessible, by
>> default, only mounting the snapshots root when you are directly working
>> with the snapshots.
>
> As far as I know, snapper puts its snapshots under .snapshots in the
> root of the snapshotted subvolume. As I want to work with the subvolume,
> it is mounted, and with it its snapshots. So, according to your answer,
> I should figure out how to change the location at which snapper places
> its snapshots. Snapper creates the subvolumes as read-only, which gives
> some protection against unwanted changes.
There's a suggested layout on the wiki:
https://btrfs.wiki.kernel.org/index.php/SysadminGuide#Managing_Snapshots
I'd suggest something like the "even flatter" layout, since it emphasizes
that snapshots are simply subvolumes that happen to be a snapshot of
whatever subvolume at some particular moment.
That way, as the wiki discusses, rolling back is simply a matter of
mounting the appropriate snapshot in place of what was the working
copy.
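With the flat layout, such a rollback might be sketched as below; the
subvolume and snapshot names are made up for illustration:

```shell
# Roll a working subvolume back to a snapshot: set the current copy
# aside, then put a writable snapshot of the chosen backup in its place.
# $1 = top-level (ID 5) mountpoint, $2 = working subvolume name,
# $3 = snapshot to roll back to (both relative to the top level).
rollback() {
    top=$1; work=$2; snap=$3
    mv "$top/$work" "$top/$work.old"                      # keep the old copy
    btrfs subvolume snapshot "$top/$snap" "$top/$work"    # writable snapshot
}

# e.g.: rollback /mnt/top home snapshots/home-2016-01-14
# (then remount so the new "home" subvolume is what's mounted at /home)
```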
Tho you do have to ensure that toplevel (ID 5) is mounted any time you're
working with snapshots, which means the snapshot creation script (snapper
for you I guess) would need to mount it before taking the snapshot, and
umount it after -- probably using a lockfile to determine whether it
should umount, so you could create that lockfile any time you're working
with toplevel mounted manually, to keep the hourly snapshot script from
umounting it.
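That mount-snapshot-umount dance, with the lockfile check, might look roughly
like this; the paths, device, and lockfile convention are all assumptions, not
snapper's own behavior:

```shell
# Mount the top level, take a read-only snapshot, then umount -- unless a
# lockfile shows someone is working with the mount manually.
# $1 = toplevel mountpoint, $2 = device, $3 = source subvolume (relative),
# $4 = destination snapshot path (relative), $5 = lockfile path.
snapshot_with_lock() {
    top=$1; dev=$2; src=$3; dest=$4; lock=$5
    mountpoint -q "$top" || mount -o subvolid=5 "$dev" "$top"   # mount toplevel
    btrfs subvolume snapshot -r "$top/$src" "$top/$dest"        # read-only snapshot
    [ -e "$lock" ] || umount "$top"   # skip the umount if the lockfile exists
}

# e.g.: snapshot_with_lock /mnt/top /dev/sda2 home \
#           "snapshots/home-$(date +%F-%H)" /run/toplevel.lock
```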
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman