* Incremental backup over writable snapshot
@ 2014-02-19 13:45 GEO
  2014-02-19 17:00 ` Chris Murphy
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: GEO @ 2014-02-19 13:45 UTC (permalink / raw)
  To: linux-btrfs

Hi,

As suggested in another thread, I would like to know the reliability of the 
following backup scheme:

Suppose I have a subvolume of my home directory called @home. 

Now I want to make incremental backups of the data in home that I care about, 
but not everything, so I create a normal snapshot of @home called @home-w and 
delete the files/folders I am not interested in backing up. 
After that I create a read-only snapshot of @home-w called @home-r, which I 
send to my target volume with btrfs send. 

After that is done, I do regular backups by always going over the writable 
snapshot, where I always remove the same directories I am not interested in, 
and then send the difference to the target volume with btrfs send -p @home-r 
@home-r-1 | btrfs receive /path/of/target/volume. 
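For concreteness, here is a rough sketch of what I mean (the mount points and
the deleted directories are only examples):

  # first run: full backup
  btrfs subvolume snapshot /mnt/@home /mnt/@home-w        # writable working snapshot
  rm -rf /mnt/@home-w/.cache                              # delete what I do not want (example)
  btrfs subvolume snapshot -r /mnt/@home-w /mnt/@home-r   # read-only snapshot for sending
  btrfs send /mnt/@home-r | btrfs receive /path/of/target/volume

  # later runs: rebuild the writable snapshot from @home, delete the same
  # directories again, then send only the difference against the last read-only snapshot
  btrfs subvolume delete /mnt/@home-w
  btrfs subvolume snapshot /mnt/@home /mnt/@home-w
  rm -rf /mnt/@home-w/.cache
  btrfs subvolume snapshot -r /mnt/@home-w /mnt/@home-r-1
  btrfs send -p /mnt/@home-r /mnt/@home-r-1 | btrfs receive /path/of/target/volume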

I do not like the idea of making subvolumes of all directories I am not 
interested in backing up.

So what I would like to know now is the following: Are there drawbacks to 
doing this, or could I further optimize my backup strategy? I have noticed 
that it takes a while to delete large files in the writable snapshot (what 
does it write there?)

Could my method somehow lead to inefficient use of disk space on the target 
volume? I mean, could the deleting itself count as a change, so that more 
data is transferred as a delta than has actually changed?

One last question would be: Is there a quick way to verify that the local 
read-only snapshot used last time is the same as the one synced to the target 
volume last time?


Thank you for your support and the great work!


* Re: Incremental backup over writable snapshot
  2014-02-19 13:45 Incremental backup over writable snapshot GEO
@ 2014-02-19 17:00 ` Chris Murphy
  2014-02-19 18:57   ` GEO
                     ` (2 more replies)
  2014-02-27 13:10 ` GEO
  2014-02-27 14:36 ` GEO
  2 siblings, 3 replies; 16+ messages in thread
From: Chris Murphy @ 2014-02-19 17:00 UTC (permalink / raw)
  To: GEO; +Cc: linux-btrfs


On Feb 19, 2014, at 6:45 AM, GEO <1g2e3o4@gmail.com> wrote:
> 
> I do not like the idea of making subvolumes of all directories I am not 
> interested in backing up.

Why? It addresses your use case.

Chris Murphy



* Re: Incremental backup over writable snapshot
       [not found]   ` <2285169.jbztTl7OC0@linuxpc>
@ 2014-02-19 17:26     ` Chris Murphy
       [not found]       ` <16991840.tqyQc6bZHr@linuxpc>
  2014-02-21 14:44     ` GEO
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Murphy @ 2014-02-19 17:26 UTC (permalink / raw)
  To: Btrfs BTRFS, GEO


On Feb 19, 2014, at 10:07 AM, GEO <1g2e3o4@gmail.com> wrote:

> On Wednesday 19 February 2014 10:00:49 you wrote:
>> Why? It addresses your use case.
>> 
>> Chris Murphy
> 
> First of all: I am mainly looking for a way to exclude almost all hidden 
> directories. I do not want to manage all those subvolumes.

What's there to manage?


> If I wanted to 
> make a system backup, all those subvolumes would not be included, etc. 
> Somehow it seems inelegant to me: what would the purpose of plain directories 
> still be? That way everything could be a subvolume to allow selective backup. 
> I do not like the idea of changing so many things on the file system just 
> because of a backup. 

If you make a subvolume of everything you don't want to back up, then when you take a snapshot of the parent subvolume, none of the child subvolumes are included in the snapshot. You end up with a snapshot that contains exactly what you want to back up. So create a read-only snapshot, back it up, delete the snapshot (if you want), done.
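Roughly, with made-up paths and assuming the directories you want excluded are
already their own subvolumes, a cycle would look like:

  # child subvolumes (e.g. an ex-directory .cache) are not included in this snapshot
  btrfs subvolume snapshot -r /mnt/@home /mnt/@home-backup
  btrfs send /mnt/@home-backup | btrfs receive /path/of/target/volume
  btrfs subvolume delete /mnt/@home-backup   # optional: drop the snapshot afterwards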

> If my approach is also effective I would very much prefer that. 

Snapshotting, deleting a bunch of directories in that snapshot, then backing up the snapshot, then deleting the snapshot will work. But it sounds more involved. But if you're scripting it, probably doesn't matter either way.


Chris Murphy


* Re: Incremental backup over writable snapshot
       [not found]       ` <16991840.tqyQc6bZHr@linuxpc>
@ 2014-02-19 17:51         ` Chris Murphy
  2014-02-19 20:20           ` Kai Krakow
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Murphy @ 2014-02-19 17:51 UTC (permalink / raw)
  To: GEO, Btrfs BTRFS


On Feb 19, 2014, at 10:29 AM, GEO <1g2e3o4@gmail.com> wrote:

> On Wednesday 19 February 2014 10:26:02 you wrote:
>> Snapshotting, deleting a bunch of directories in that snapshot, then backing
>> up the snapshot, then deleting the snapshot will work. But it sounds more
>> involved. But if you're scripting it, probably doesn't matter either way.
> 
> Will it work as well? 
> I am scripting things, so it does not matter. If it makes no difference in the 
> end result, it should be just a matter of taste.
> The question for me is whether both lead to the same result. If I have not 
> misunderstood things, they should, shouldn't they?

Please also reply to the list directly.

It sounds like it's the same outcome but actually I don't know that send/receive will see it that way. It's necessary for the receive destination to be identical to the source parent, or the increment will not work. And I don't know that the way you're doing this means the source and destination are really identical even though you're deleting the same folders every time. So you'll just have to test it and see if it works. I wouldn't rely on this as a sole backup strategy.
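As a rough check (the exact output fields depend on your btrfs-progs version),
you can compare the source snapshot's UUID with the "Received UUID" recorded
on the target:

  # on the source: note the snapshot's UUID
  btrfs subvolume show /mnt/@home-r | grep -i uuid

  # on the target: the received subvolume records the UUID of the snapshot it was sent from
  btrfs subvolume show /path/of/target/volume/@home-r | grep -i 'received uuid'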

Chris Murphy


* Re: Incremental backup over writable snapshot
  2014-02-19 17:00 ` Chris Murphy
@ 2014-02-19 18:57   ` GEO
  2014-02-20 13:20   ` GEO
       [not found]   ` <2285169.jbztTl7OC0@linuxpc>
  2 siblings, 0 replies; 16+ messages in thread
From: GEO @ 2014-02-19 18:57 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

On Wednesday 19 February 2014 10:00:49 Chris Murphy wrote:
> On Feb 19, 2014, at 6:45 AM, GEO <1g2e3o4@gmail.com> wrote:
> > I do not like the idea of making subvolumes of all directories I am not
> > interested in backing up.
> 
> Why? It addresses your use case.
> 
> Chris Murphy

I would prefer not to turn every directory I do not want to include into a 
subvolume, as there are almost more directories that I am not interested in 
than ones I am. 

My question would simply be: does the method of going over the writable 
snapshot and deleting things always lead to the same incremental end result 
as turning the directories I am not interested in into subvolumes (apart from 
the additional empty directories created in the latter case)?

Furthermore, the set of hidden directories in home changes very often: if I 
install additional software, additional hidden directories may be created, so 
my script would have to turn them into subvolumes every time.
And if I have hidden files, I cannot turn files into subvolumes at all, so it 
is clear that my method makes sense. 

Once I have turned these directories into subvolumes and I want to create 
snapshots of my whole home subvolume, I would always have to handle those 
additionally.

So it makes the whole situation less manageable. 
Apart from that, I find turning every directory I am not interested in into a 
subvolume highly inelegant. 

So my question would be whether my preferred method is as reliable as the 
suggested method. 

Hope that's on the mailing list now :-). 


Thanks


* Re: Incremental backup over writable snapshot
  2014-02-19 17:51         ` Chris Murphy
@ 2014-02-19 20:20           ` Kai Krakow
  2014-02-20  3:31             ` Kai Krakow
  2014-02-20 11:03             ` Duncan
  0 siblings, 2 replies; 16+ messages in thread
From: Kai Krakow @ 2014-02-19 20:20 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy <lists@colorremedies.com> schrieb:

>>> Snapshotting, deleting a bunch of directories in that snapshot, then
>>> backing up the snapshot, then deleting the snapshot will work. But it
>>> sounds more involved. But if you're scripting it, probably doesn't
>>> matter either way.
>> 
>> Will it work as well?
>> I am scripting things, so it does not matter. If it makes no difference
>> in the end result, it should be just a matter of taste.
>> The question for me is whether both lead to the same result. If I have not
>> misunderstood things, they should, shouldn't they?
> 
> Please also reply to the list directly.
> 
> It sounds like it's the same outcome but actually I don't know that
> send/receive will see it that way. It's necessary for the receive
> destination to be identical to the source parent, or the increment will
> not work. And I don't know that the way you're doing this means the source
> and destination are really identical even though you're deleting the same
> folders every time. So you'll just have to test it and see if it works. I
> wouldn't rely on this as a sole backup strategy.

I don't understand anyway why one wouldn't want to back up the dotfile 
directories... They contain important configuration stuff or even very 
valuable user data like mail storages. Most of these directories aren't 
changing anyways most of the time and thus won't occupy disk space only once 
in the backup.

In a restore scenario it is as simple as copying this stuff back, and your 
complete profile with all configuration is restored - no more hassle. You can 
then delete the stuff you don't want at that stage.

The only directory in question would be ".cache" - and that's simple to 
turn into a subvolume. And even then, some software may rely on its cache 
contents still existing or having a specific state in time (imagine you 
restore an older copy and leave a current .cache in place) - I'd prefer to 
simply keep them. A better approach may be something like "find .cache 
-ctime +90 -delete" (or, more precisely, run that on specific subdirectories 
there that are known to grow unconditionally). For me, even that's not worth 
the hassle. It's better to have something and not need it.

I suggest not trying to micro-optimize backups; instead, grow your backup 
storage if space is such a problem. Storage is inexpensive these days.

In my experience, incomplete backups are no good backups. In case of disaster 
you will almost certainly learn the hard way that you should not have 
excluded this or that directory from the backup.

I only exclude files from backup that are known to change often as whole 
files and are easily recoverable from the internet. That hardly applies to 
any directory you have. Another candidate is VM images, which often require 
different backup strategies. In the end, such examples are so rare that it is 
easier to create subvolumes for a few special directories so they become 
excluded, and then set up a specialized backup strategy for some of these 
subvolumes. The only "management" requirement of this is keeping track of 
which subvolumes need this extra treatment for different backup strategies. 
You don't need to manage mount points or anything else. Duncan had a nice 
example on this list of how to migrate directories to subvolumes using 
shallow copies: "mv dir dir.old && btrfs sub create dir && 
cp -a --reflink=always dir.old/. dir/. && rm -Rf dir.old".
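Spelled out with comments, where "dir" stands for whichever directory you want
to turn into a subvolume, that is roughly:

  mv dir dir.old                           # move the real directory out of the way
  btrfs subvolume create dir               # create an empty subvolume in its place
  cp -a --reflink=always dir.old/. dir/.   # shallow-copy the contents (no data is duplicated)
  rm -Rf dir.old                           # remove the old directory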

As a general rule of thumb: Follow the KISS principle for your backup, or 
live with a lot of headaches - at least for the case of recovery. Deleting 
stuff from a backup snapshot before sending it sounds silly, insane, and 
error-prone to me (please do not take that personally, it's not meant that 
way).

-- 
Replies to list only preferred.



* Re: Incremental backup over writable snapshot
  2014-02-19 20:20           ` Kai Krakow
@ 2014-02-20  3:31             ` Kai Krakow
  2014-02-20 11:03             ` Duncan
  1 sibling, 0 replies; 16+ messages in thread
From: Kai Krakow @ 2014-02-20  3:31 UTC (permalink / raw)
  To: linux-btrfs

Kai Krakow <hurikhan77+btrfs@gmail.com> schrieb:

> Most of these directories aren't
> changing anyways most of the time and thus won't occupy disk space only
> once in the backup.

Of course "won't" should've read "would"... ;-)

-- 
Replies to list only preferred.



* Re: Incremental backup over writable snapshot
  2014-02-19 20:20           ` Kai Krakow
  2014-02-20  3:31             ` Kai Krakow
@ 2014-02-20 11:03             ` Duncan
  2014-02-20 21:16               ` Kai Krakow
  1 sibling, 1 reply; 16+ messages in thread
From: Duncan @ 2014-02-20 11:03 UTC (permalink / raw)
  To: linux-btrfs

Kai Krakow posted on Wed, 19 Feb 2014 21:20:23 +0100 as excerpted:

>  Duncan had a nice example on this list of how to migrate
> directories to subvolumes using shallow copies: "mv dir dir.old &&
> btrfs sub create dir && cp -a --reflink=always dir.old/. dir/. && rm
> -Rf dir.old".

FWIW, that was someone else.  I remember seeing it and I may well have 
been involved in some aspect of the discussion and thus might have quoted 
it, but my particular use-case doesn't involve a lot of subvolumes or 
snapshots, so I don't typically get quite that deep into the command-
detail in subvolume discussions as I've simply not had the necessary 
personal experience in that area to properly discuss at that level.  (Tho 
it's certainly typical of what I might post in other areas, just not that 
one.)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Incremental backup over writable snapshot
  2014-02-19 17:00 ` Chris Murphy
  2014-02-19 18:57   ` GEO
@ 2014-02-20 13:20   ` GEO
  2014-02-20 23:04     ` Kai Krakow
       [not found]   ` <2285169.jbztTl7OC0@linuxpc>
  2 siblings, 1 reply; 16+ messages in thread
From: GEO @ 2014-02-20 13:20 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

@Kai Krakow: I accept your opinion and thank you for your answer.
However, I have special reasons for doing so. I could name a few use cases. 
For example, I do not need to back up search indexes, as they get messed up 
over time, so I simply recreate the cache in case of a new install. 
I know most of the settings I have set, and I know exactly which missing 
directories break what when deleted, because I have tried it various times.

This is not supposed to be a system backup, or a "home" backup, but a backup 
of my data (documents, videos, etc.). 
I know hidden directories contain mail etc., but I know exactly where my mails 
are (most of them are IMAP anyway) and I would include them in the backup. 

So I am looking at a different use case.

Anyway, I know most of you won't like my idea, but my question was whether, 
if I do everything right (and do not delete the wrong stuff out of stupidity), 
the result would be as reliable as your approach. So please consider this a 
technical question, even if you strongly dislike the idea.

So my initial question remains: does deleting some stuff change the whole 
snapshot in a way that screws up the incremental step, meaning I would back 
up blocks that are not new? 
I do not have the expertise to check the code and answer my question myself, 
so I would like to hear the opinion of the devs on whether my way should work 
in theory or not, regardless of the fact that the use case has not been 
tested and is not recommended. 

Thank you very much for your opinions.


* Re: Incremental backup over writable snapshot
  2014-02-20 11:03             ` Duncan
@ 2014-02-20 21:16               ` Kai Krakow
  0 siblings, 0 replies; 16+ messages in thread
From: Kai Krakow @ 2014-02-20 21:16 UTC (permalink / raw)
  To: linux-btrfs

Duncan <1i5t5.duncan@cox.net> schrieb:

>>  Duncan had a nice example on this list of how to migrate
>> directories to subvolumes using shallow copies: "mv dir dir.old &&
>> btrfs sub create dir && cp -a --reflink=always dir.old/. dir/. && rm
>> -Rf dir.old".
> 
> FWIW, that was someone else.  I remember seeing it and I may well have
> been involved in some aspect of the discussion and thus might have quoted
> it, but my particular use-case doesn't involve a lot of subvolumes or
> snapshots, so I don't typically get quite that deep into the command-
> detail in subvolume discussions as I've simply not had the necessary
> personal experience in that area to properly discuss at that level.  (Tho
> it's certainly typical of what I might post in other areas, just not that
> one.)

Oh sorry... I was just testing whether you are following the list closely... ;-)

I didn't, as you can see. :-)
 
-- 
Replies to list only preferred.



* Re: Incremental backup over writable snapshot
  2014-02-20 13:20   ` GEO
@ 2014-02-20 23:04     ` Kai Krakow
  0 siblings, 0 replies; 16+ messages in thread
From: Kai Krakow @ 2014-02-20 23:04 UTC (permalink / raw)
  To: linux-btrfs

GEO <1g2e3o4@gmail.com> schrieb:

> @Kai Krakow: I accept your opinion and thank you for your answer.
> However, I have special reasons for doing so. I could name a few use cases.
> For example, I do not need to back up search indexes, as they get messed up
> over time, so I simply recreate the cache in case of a new install.
> I know most of the settings I have set, and I know exactly which missing
> directories break what when deleted, because I have tried it various times.

I tried to keep it neutral to keep people from applying your special case as 
an idea for their own backups, which may have totally different requirements.

> This is not supposed to be a system backup, or a "home" backup, but a
> backup of my data (documents, videos, etc.).
> I know hidden directories contain mail etc., but I know exactly where my
> mails are (most of them are IMAP anyway) and I would include them in the
> backup.
> 
> So I am looking at a different use case.

I may be wrong, but it sounds like you are approaching the problem from the 
wrong direction. If only selected data is important to you for backup: why 
not put your important data in a single subvolume and back that up? You could 
install some compatibility symlinks to keep consistency with the well-known 
directory structure... This is how I usually handle such corner cases. A 
small script like "recreate_symlinks.sh", which you also put into the backup, 
will help you in case of restoring.
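A minimal sketch of such a script (the subvolume name and the linked
directories are only examples):

  #!/bin/sh
  # recreate_symlinks.sh - restore compatibility symlinks after a restore
  DATA="$HOME/DATA"                    # the subvolume that actually holds the data
  for d in Documents Music Pictures Videos; do
      ln -sfn "$DATA/$d" "$HOME/$d"    # point the well-known names at the backed-up data
  done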

-- 
Replies to list only preferred.



* Re: Incremental backup over writable snapshot
       [not found]   ` <2285169.jbztTl7OC0@linuxpc>
  2014-02-19 17:26     ` Chris Murphy
@ 2014-02-21 14:44     ` GEO
  2014-02-21 18:56       ` Kai Krakow
  1 sibling, 1 reply; 16+ messages in thread
From: GEO @ 2014-02-21 14:44 UTC (permalink / raw)
  To: linux-btrfs

First of all, I am sorry that I screwed up the whole structure of the 
discussion (I am not subscribed to the mailing list, and as Kai replied to 
the mailing list only, I could not reply to his answer).

Kai: Yeah, your point was neutral and I never understood it otherwise. 
Thank you for your answer!
I already had the idea of creating a subvolume called DATA in my home 
directory; however, I find that pretty annoying, as most applications will 
open home by default. 
In fact, I would find it more elegant to generally back up without making 
changes to my file system structure in home. 

I know that there are other possibilities to do what I want, but I am asking 
whether the initially described method would work reliably, given that the 
user does not make a fundamental mistake himself. 
I know it may sound stubborn, but I am really interested in whether my method 
works just as reliably as the other suggested methods.

As I do not have the expertise required to check the code myself, I am asking 
here in the hope that someone who knows the code could state whether it 
should work or not. 

Thank you all for your help!


* Re: Incremental backup over writable snapshot
  2014-02-21 14:44     ` GEO
@ 2014-02-21 18:56       ` Kai Krakow
  0 siblings, 0 replies; 16+ messages in thread
From: Kai Krakow @ 2014-02-21 18:56 UTC (permalink / raw)
  To: linux-btrfs

GEO <1g2e3o4@gmail.com> schrieb:

> First of all, I am sorry that I screwed up the whole structure of the
> discussion (I am not subscribed to the mailing list, and as Kai replied
> to the mailing list only, I could not reply to his answer).

Umm... Try an NNTP gateway like gmane to follow the list in an NNTP news 
reader of your choice. That way you are effectively subscribed without 
getting email sent to your inbox and without digests being sent. It's really 
enjoyable.

> Kai: Yeah, your point was neutral and I never understood it otherwise.
> Thank you for your answer!

NP.

> I already had the idea of creating a subvolume called DATA in my home
> directory; however, I find that pretty annoying, as most applications will
> open home by default.

Well, I mentioned that you may want to place symlinks for the well-known home 
directory locations like "music", "documents", etc. that point into that data 
directory. It's a bit ugly but it should do the job well. You can make your 
data directory hidden by naming it ".my-backup-data" or something, then point 
symlinks into it for documents, music, etc. That way the applications still 
open your home fine, it is not cluttered with your data directory, and the 
symlinks will serve fine. It may not be what you prefer, but it can be an 
elegant solution. But there you go:

> In fact, I would find it more elegant to generally back up without making
> changes to my file system structure in home.
> 
> I know that there are other possibilities to do what I want, but I am asking
> whether the initially described method would work reliably, given that the
> user does not make a fundamental mistake himself.
> I know it may sound stubborn, but I am really interested in whether my method
> works just as reliably as the other suggested methods.

Given that there are no mistakes in your procedure for the preparation steps 
of your backup, it should work perfectly reliably. I think btrfs send/receive 
still has some quirks handling corner cases - and those may well be triggered 
by your idea of cleaning up the snapshot first - but generally it should work 
(at least if btrfs send/receive works as intended; there are no design 
decisions that would work against your use case).

> As I do not have the expertise required to check the code myself, I am
> asking here in the hope that someone who knows the code could state whether
> it should work or not.

Putting potential bugs aside (btrfs is still experimental, and btrfs 
send/receive still has many corner-case quirks), it will work. But the design 
of your backup-preparation process will put a lot of unnecessary work on your 
btrfs (in comparison to the alternatives already outlined here), increasing 
fragmentation and metadata allocation, and probably making btrfs send/receive 
less efficient.

So to conclude: Your approach will probably be a good test scenario for 
btrfs send/receive and snapshots. But I really wouldn't try it without 
having a known-to-work backup up your sleeve.

-- 
Replies to list only preferred.



* Re: Incremental backup over writable snapshot
  2014-02-19 13:45 Incremental backup over writable snapshot GEO
  2014-02-19 17:00 ` Chris Murphy
@ 2014-02-27 13:10 ` GEO
  2014-02-28  6:54   ` Duncan
  2014-02-27 14:36 ` GEO
  2 siblings, 1 reply; 16+ messages in thread
From: GEO @ 2014-02-27 13:10 UTC (permalink / raw)
  To: linux-btrfs

Does anyone have any technical info regarding the reliability of the 
incremental backup process using the said method 
(apart from all the recommendations not to do it that way)?
So the question I am interested in: should it work or not?
I did some testing myself and it seemed to work; however, I cannot find out 
whether it backs up unnecessary blocks, thus making the incremental step 
space-inefficient.
That information would help me very much!
Thank you very much!

On Wednesday 19 February 2014 14:45:57 GEO wrote:
> Hi,
> 
> As suggested in another thread, I would like to know the reliability of the
> following backup scheme:
> 
> Suppose I have a subvolume of my home directory called @home.
> 
> Now I want to make incremental backups of the data in home that I care
> about, but not everything, so I create a normal snapshot of @home called
> @home-w and delete the files/folders I am not interested in backing up.
> After that I create a read-only snapshot of @home-w called @home-r, which I
> send to my target volume with btrfs send.
> 
> After that is done, I do regular backups by always going over the writable
> snapshot, where I always remove the same directories I am not interested in,
> and then send the difference to the target volume with btrfs send -p @home-r
> @home-r-1 | btrfs receive /path/of/target/volume.
> 
> I do not like the idea of making subvolumes of all directories I am not
> interested in backing up.
> 
> So what I would like to know now is the following: Are there drawbacks to
> doing this, or could I further optimize my backup strategy? I have noticed
> that it takes a while to delete large files in the writable snapshot (what
> does it write there?)
> 
> Could my method somehow lead to inefficient use of disk space on the target
> volume? I mean, could the deleting itself count as a change, so that more
> data is transferred as a delta than has actually changed?
> 
> One last question would be: Is there a quick way to verify that the local
> read-only snapshot used last time is the same as the one synced to the
> target volume last time?
> 
> 
> Thank you for your support and the great work!




* Re: Incremental backup over writable snapshot
  2014-02-19 13:45 Incremental backup over writable snapshot GEO
  2014-02-19 17:00 ` Chris Murphy
  2014-02-27 13:10 ` GEO
@ 2014-02-27 14:36 ` GEO
  2 siblings, 0 replies; 16+ messages in thread
From: GEO @ 2014-02-27 14:36 UTC (permalink / raw)
  To: linux-btrfs

@Kai, thank you very much for your reply. Sorry, I only saw it just now. 
I will take care of the mailing issue now, so that it does not happen again 
in the future.

Sorry for the inconvenience!


* Re: Incremental backup over writable snapshot
  2014-02-27 13:10 ` GEO
@ 2014-02-28  6:54   ` Duncan
  0 siblings, 0 replies; 16+ messages in thread
From: Duncan @ 2014-02-28  6:54 UTC (permalink / raw)
  To: linux-btrfs

GEO posted on Thu, 27 Feb 2014 14:10:25 +0100 as excerpted:

> Does anyone have any technical info regarding the reliability of the
> incremental backup process using the said method?

Stepping back from your specific method for a moment...

You're using btrfs send/receive, which I wouldn't exactly call entirely 
reliable ATM -- just look at all the patches going by on the list to fix it 
up.  In theory it should /get/ there, but it's very much in flux at this 
moment; certainly nothing I'd personally rely on here.  Btrfs itself is still 
only semi-stable, and send/receive is one of its more advanced features and 
currently one of the least likely to work without errors.  (Tho raid5/6 mode 
is worse, since from all I've read send/receive should at least fail up-front 
if it's going to fail, while raid5/6 will currently look like it's working... 
until you actually need the raid5/6 redundancy and btrfs data-integrity 
aspects!)

From what I've read, *IF* the send/receive process completes without errors 
it should make a reasonably reliable backup.  The problem is that there are a 
lot of error-triggering corner cases ATM, and given your definitely 
non-standard use-case, I expect your chances of running into such errors are 
higher than normal.  But if send/receive /does/ complete without errors, 
AFAIK it should be a reliable replication.

Meanwhile, over time those corner-cases should be worked out, and I've 
seen nothing in your use-case that says it /shouldn't/ work, once send/
receive itself is working reliably.  Your use-case may be an odd corner-
case, but it should either work or not, and once btrfs send/receive is 
working reliably, based on all I've read both from you and on the list in 
general, your case too should work reliably. =:^)

But for the moment, unless your aim is to be a guinea pig working closely 
with the devs to test an interesting corner-case and report problems so they 
can be traced and fixed, I'd suggest using some other method.  Give btrfs 
send/receive, and the filesystem as a whole, another six months or a year to 
mature and stabilize.  AFAIK your suggested method might not be the most 
efficient or recommended way to do things, for the reasons others have given, 
but it should nonetheless work.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


