linux-btrfs.vger.kernel.org archive mirror
* Backup: Compare sent snapshots
@ 2014-03-30  8:58 GEO
  2014-03-30 22:41 ` Duncan
  0 siblings, 1 reply; 6+ messages in thread
From: GEO @ 2014-03-30  8:58 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I am doing backups regularly following the scheme of 
https://btrfs.wiki.kernel.org/index.php/Incremental_Backup

It states we keep a local reference of the read only snapshot we sent to the 
backup drive, which I understand.
But now I have a question:  When I do a read only snapshot of home, send the 
difference to the backup drive, keep it until the next incremental step, send 
the difference to the backup drive, remove the old read only snapshot and so 
on...
I wonder what happens if the read only snapshot I keep as a local reference 
got corrupted somehow. 
Then maybe too much difference would be sent, which would not be dramatic, or 
too little, which would be. 
Is there a quick way I could compare the last sent snapshot to the local one, 
to make sure the local reference is still the same?

Apart from that, imagine I somehow lost the local reference (e.g. deleted it by 
mistake), would there still be a way to sync the difference to the last sent 
snapshot on the backup device?





* Re: Backup: Compare sent snapshots
  2014-03-30  8:58 Backup: Compare sent snapshots GEO
@ 2014-03-30 22:41 ` Duncan
  2014-09-26 16:15   ` G EO
  0 siblings, 1 reply; 6+ messages in thread
From: Duncan @ 2014-03-30 22:41 UTC (permalink / raw)
  To: linux-btrfs

GEO posted on Sun, 30 Mar 2014 10:58:13 +0200 as excerpted:

> Hi,
> 
> I am doing backups regularly following the scheme of
> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup

Be aware that send/receive is getting a lot of attention and bugfixes 
ATM.  To the best of my knowledge, where it completes without error it's 
100% reliable, but there are various corner-cases where it will presently 
trigger errors.  So I'd have a fallback backup method ready, just in case 
it quits working, and wouldn't rely on it working at this point.  (Altho 
with all the bug fixing it's getting ATM, it should be far more reliable 
in the future.)

FWIW, the problems seem to be semi-exotic cases, like where subdirectory 
A originally had subdir B nested inside it, but then that was reversed, 
so A is now inside B instead of B inside A.  Send/receive can get a bit 
mixed up in that sort of case, still.

> It states we keep a local reference of the read only snapshot we sent to
> the backup drive, which I understand.
> But now I have a question:  When I do a read only snapshot of home, send
> the difference to the backup drive, keep it until the next incremental
> step, send the difference to the backup drive, remove the old read only
> snapshot and so on...
> I wonder what happens if the read only snapshot I keep as a local
> reference got corrupted somehow.
> Then maybe too much difference would be sent, which would not be
> dramatic, or too little, which would be.

In general, it wouldn't be a case of too much or too little being sent.  
It would be a case of send or receive saying, "Hey, this no longer makes 
sense, ERROR!"

But as I said above, as long as both ends are completing without error, 
the result should be fully reliable.

> Is there a quick way I could compare the last sent snapshot to the local
> one, to make sure the local reference is still the same?

A checksum of all content (including metadata), using md5sum or the like, 
on both ends.  Or do a checksum and keep a record of the result, 
comparing a later result to the previous one for the same snapshot.

As for what to actually do that checksum on, I'll let someone with more 
knowledge and experience speak up there.
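
But as a rough sketch of one approach, assuming the snapshot is mounted at 
/mnt/snapshots/home-last locally and at /mnt/backup/home-last on the backup 
drive (placeholder paths, adjust to your layout), and covering file data and 
names only, not ownership or xattrs:

  cd /mnt/snapshots/home-last
  # hash every file, then hash the sorted list into a single digest
  find . -type f -exec md5sum {} + | sort -k 2 | md5sum

Run the same pipeline against the copy on the backup drive and compare the 
two digests.  Treat it as a quick sanity check, not proof of bit-identical 
metadata.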

> Apart from that, imagine I somehow lost the local reference (e.g. deleted
> it by mistake), would there still be a way to sync the difference to the
> last sent snapshot on the backup device?

Two possibilities:

1) Reverse the send/receive, sending it from the backup to the working 
instance, thereby recreating the missing snapshot.

2) Keep more than one snapshot on each end, with the snapshot thinning 
scripts kept in sync.

So if you're doing hourly send/receive, keep the latest three, plus the 
one done at (say) midnight for each of the previous two days, plus the 
midnight snapshot for say Saturday night for the last four weeks, being 
sure to keep the same snapshots on both ends.

That way, if there's a problem with the latest send/receive, you can try 
doing a send/receive against the two hour ago or two day ago base, 
instead of the one from an hour ago.  If that doesn't work, you can try 
reversing the send/receive, sending from the backup.
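
As a sketch with made-up snapshot names (say the local snapshots live under 
/mnt/snapshots and the backup is mounted at /mnt/backup):

  # normal case: incremental against the hour-ago snapshot
  btrfs send -p /mnt/snapshots/home-1h /mnt/snapshots/home-now | \
    btrfs receive /mnt/backup
  # if that reference is suspect, fall back to an older common base
  btrfs send -p /mnt/snapshots/home-2d /mnt/snapshots/home-now | \
    btrfs receive /mnt/backup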

But as I said, do be prepared for send/receive bugs and the errors they 
trigger.  If you hit one, you may have to fall back to something more 
traditional such as rsync, presumably reporting the bug and keeping the 
last good received snapshots around as a reference so you can try again 
with test patches or after the next kernel upgrade.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Backup: Compare sent snapshots
  2014-03-30 22:41 ` Duncan
@ 2014-09-26 16:15   ` G EO
  2014-09-27  4:17     ` Duncan
  0 siblings, 1 reply; 6+ messages in thread
From: G EO @ 2014-09-26 16:15 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

Okay I have a couple of questions again regarding the lost local
reference part:

How was the reverse btrfs send/receive meant? Should I simply do

"btrfs send /mnt/backup-partition/snapshot_name | btrfs receive
/mnt/root-partition/"

or should I use btrfs send -p and compare to some newly created local
snapshot? When I send back the lost reference from the backup drive,
will that write the incremental part that has changed since then, or
will it allocate space the size of the snapshot?

In https://btrfs.wiki.kernel.org/index.php/Incremental_Backup there is
talk of "Efficiently determining and streaming the differences
between two snapshots if they are either snapshots of the same
underlying subvolume, or have a parent-child relationship." Won't that
required part get lost by reversely sending the last local reference?


What would you do in the case of a new install? You import your last
home snapshot (as your primary home subvolume) using "btrfs send
/mnt/backup-partition/home_timestamp | btrfs receive
/mnt/root-partition/". Now you have imported your snapshot but it's
still read only. How to make it writable? I simply did this for now by
making a snapshot of the read only snapshot and renaming that snapshot
to "home". Is that the right way to do it?

Now on the new install we want to continue doing our backups using the
old backup drive. Since we have the read only snapshot we imported (and
we took a writeable snapshot of it that is now our primary snapshot)
we can simply continue doing the incremental step without the bootstrap
step right?


You would really help me a great deal if you could answer some of
the questions.

On Mon, Mar 31, 2014 at 12:41 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> GEO posted on Sun, 30 Mar 2014 10:58:13 +0200 as excerpted:
>
>> Hi,
>>
>> I am doing backups regularly following the scheme of
>> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup
>
> Be aware that send/receive is getting a lot of attention and bugfixes
> ATM.  To the best of my knowledge, where it completes without error it's
> 100% reliable, but there are various corner-cases where it will presently
> trigger errors.  So I'd have a fallback backup method ready, just in case
> it quits working, and wouldn't rely on it working at this point.  (Altho
> with all the bug fixing it's getting ATM, it should be far more reliable
> in the future.)
>
> FWIW, the problems seem to be semi-exotic cases, like where subdirectory
> A originally had subdir B nested inside it, but then that was reversed,
> so A is now inside B instead of B inside A.  Send/receive can get a bit
> mixed up in that sort of case, still.
>
>> It states we keep a local reference of the read only snapshot we sent to
>> the backup drive, which I understand.
>> But now I have a question:  When I do a read only snapshot of home, send
>> the difference to the backup drive, keep it until the next incremental
>> step, send the difference to the backup drive, remove the old read only
>> snapshot and so on...
>> I wonder what happens if the read only snapshot I keep as a local
>> reference got corrupted somehow.
>> Then maybe too much difference would be sent, which would not be
>> dramatic, or too little, which would be.
>
> In general, it wouldn't be a case of too much or too little being sent.
> It would be a case of send or receive saying, "Hey, this no longer makes
> sense, ERROR!"
>
> But as I said above, as long as both ends are completing without error,
> the result should be fully reliable.
>
>> Is there a quick way I could compare the last sent snapshot to the local
>> one, to make sure the local reference is still the same?
>
> A checksum of all content (including metadata), using md5sum or the like,
> on both ends.  Or do a checksum and keep a record of the result,
> comparing a later result to the previous one for the same snapshot.
>
> As for what to actually do that checksum on, I'll let someone with more
> knowledge and experience speak up there.
>
>> Apart from that, imagine I somehow lost the local reference (e.g. deleted
>> it by mistake), would there still be a way to sync the difference to the
>> last sent snapshot on the backup device?
>
> Two possibilities:
>
> 1) Reverse the send/receive, sending it from the backup to the working
> instance, thereby recreating the missing snapshot.
>
> 2) Keep more than one snapshot on each end, with the snapshot thinning
> scripts kept in sync.
>
> So if you're doing hourly send/receive, keep the latest three, plus the
> one done at (say) midnight for each of the previous two days, plus the
> midnight snapshot for say Saturday night for the last four weeks, being
> sure to keep the same snapshots on both ends.
>
> That way, if there's a problem with the latest send/receive, you can try
> doing a send/receive against the two hour ago or two day ago base,
> instead of the one from an hour ago.  If that doesn't work, you can try
> reversing the send/receive, sending from the backup.
>
> But as I said, do be prepared for send/receive bugs and the errors they
> trigger.  If you hit one, you may have to fall back to something more
> traditional such as rsync, presumably reporting the bug and keeping the
> last good received snapshots around as a reference so you can try again
> with test patches or after the next kernel upgrade.
>
> --
> Duncan - List replies preferred.   No HTML msgs.
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master."  Richard Stallman
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Backup: Compare sent snapshots
  2014-09-26 16:15   ` G EO
@ 2014-09-27  4:17     ` Duncan
  2014-09-27 10:01       ` G EO
  0 siblings, 1 reply; 6+ messages in thread
From: Duncan @ 2014-09-27  4:17 UTC (permalink / raw)
  To: linux-btrfs

G EO posted on Fri, 26 Sep 2014 18:15:33 +0200 as excerpted:

> Okay I have a couple of questions again regarding the lost local
> reference part:

Please quote, then reply in context.  Trying to figure this out cold, with 
no context, is no fun; tho the context was at the bottom, I'm paging 
back and forth to see it...

Meanwhile, to keep /this/ in context, let me be clear that my own use 
case doesn't involve send/receive at all, so what I know of it is from 
the list and wiki, not my own experience.  At the time I wrote the 
previous reply they were still having problems with exotic corner-cases 
and send/receive.  I'm not sure if that has been worked thru for the most 
part or not, tho as I said, from everything I've seen, when there /were/ 
problems it would error out, not silently fail to do a reliable copy.

> How was the reverse btrfs send/receive meant? Should I simply do
> 
> "btrfs send /mnt/backup-partition/snapshot_name | btrfs receive
> /mnt/root-partition/"

This is effectively a full send/receive, what you'd do if you did a fresh 
mkfs on the receive side and wanted to repopulate it.

> or should I use btrfs send -p and compare to some newly created local
> snapshot? When I send back the lost reference from the backup drive,
> will that write the incremental part that has changed since then, or will
> it allocate space the size of the snapshot?

What I had in mind was this (again, with the caveat that I'm not actually 
using send/receive myself, so I'd suggest testing or getting confirmation 
from someone that is, before depending on this):

On the main filesystem, you did the first full send, we'll call it A.  
Then you did an incremental send with a new snapshot, B, using A as its 
parent, and later another, C, using B as its parent.

On the backup, assuming you didn't delete anything and that the receives 
completed without error, you'd then have copies of all three, A, B, C.  
Now let's say you decide A is old and you no longer need it, so you 
delete it on both sides, leaving you with B and C.

Now back on the main machine C is damaged.  But you still have B on both 
machines and C on the backup machine.

What I was suggesting was that you could reverse the last send/receive, 
sending C from the backup with B as its parent (since B exists undamaged 
on both sides, with C undamaged on the backup but damaged or missing on 
the main machine), thereby restoring a valid copy of C on the main 
machine once again.

Once you have a valid copy of C on the main machine again, you can now do 
normal incremental send/receive D, using C as its parent, just as you 
would have if C had never been damaged in the first place, because you 
restored a valid C reference on your main machine in order to be able 
to do so.
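
In command form, this is just the usual incremental send run the other way 
round (a sketch only; B, C and D are placeholder snapshot names, with the 
backup mounted at /mnt/backup and the local snapshots under /mnt/snapshots):

  # on the main machine, with the backup drive mounted: restore C using B
  btrfs send -p /mnt/backup/B /mnt/backup/C | btrfs receive /mnt/snapshots
  # later, the next incremental can use the restored C as its parent again
  btrfs send -p /mnt/snapshots/C /mnt/snapshots/D | btrfs receive /mnt/backup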


Snapshot size?  That's not as well defined as you might think.  Do you 
mean the size of everything in that snapshot, including blocks shared 
with other snapshots, or do you mean just the size of what isn't shared, 
in effect, the space you'd get back if you deleted that snapshot?  Or do 
you mean include the blocks shared, but divide that by the number of 
snapshots sharing each block, in effect apportioning each snapshot its 
fair share, but with the not necessarily expected side effect that if you 
delete another snapshot that shared some of the blocks, suddenly the size 
of this one increases, because there are fewer snapshots sharing the same 
data, now?

Of course anyone who has attempted a reasonable discussion of Linux 
memory usage, accounting for shared object libraries, should see the 
direct parallel to that discussion as well.  The same basic concepts 
apply to both, and either subject is considerably more complex than it 
might at first seem, because all three approaches have some merits and 
some disadvantages, depending on what you're trying to actually measure 
with the term "size".

I'm /guessing/ that you mean the full size of all data and metadata in 
the snapshot.  In that case, using the above "reverse" send/receive to 
recover the damaged or missing reference /should/ not require the full 
"size" of the snapshot, no.  OTOH, if you mean the size of the data and 
metadata exclusive to that snapshot, the amount you'd get back if you 
deleted it on the backup machine, then yes, it'll require that much space 
on the main machine as well.

Of course that's with the caveat that you haven't done anything to 
reduplicate the data, breaking the sharing between snapshots.  The big 
factor there is defrag.  While snapshot-aware-defrag was introduced in I 
think kernel 3.9, that implementation turned out not to scale well AT 
ALL, and it was using 10s of GiB of RAM and taking days to defrag what 
should have been done in a few hours.  So they ended up disabling 
snapshot-aware-defrag again, until they can fix the scaling issues.  That 
means that when you defrag, you're defragging /just/ the snapshot you 
happened to point defrag at, and anything it moves in that defrag is 
in effect duplicated, since other snapshots previously sharing that data 
aren't defragged along with it so they keep a reference to the old, 
undefragged version, thus doubling the required space for anything moved.

Thus defrag will, obviously, have implications in terms of space 
required.  And actually, tho I didn't think of it until just now as I'm 
writing this, it's going to have send/receive implications as well, since 
the defragged blocks will no longer be shared with the reference/parent 
snapshot, thus requiring sending all that data over again.  OUCH!
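
For example (hypothetical path), running

  btrfs filesystem defragment -r /home

rewrites extents only in that one subvolume; read-only snapshots sharing 
those extents keep pointing at the old copies, so the rewritten data is 
stored twice and will also be retransmitted by the next incremental send.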


> In https://btrfs.wiki.kernel.org/index.php/Incremental_Backup there is
> talk of "Efficiently determining and streaming the differences
> between two snapshots if they are either snapshots of the same
> underlying subvolume, or have a parent-child relationship." Won't that
> required part get lost by reversely sending the last local reference?

No, because the previous reference, B in my example above, remains the 
parent reference on both sides.  As long as there's a common reference on 
both sides, the relationship should be maintained.

Think of it this way, if that relationship would be lost in a "reverse" 
send/receive, it would have been lost in the original send/receive to the 
backup machine, as well.  It's exactly the same concept in both 
instances; you're simply reversing the roles so that the machine that was 
the sending machine is now receiving, while the receiving machine is now 
the sender.  If the relationship would get lost, it'd get lost in both 
cases, and if it got lost in the original case, then incremental send/
receive would be broken and simply wouldn't work.  The ONLY way it can 
work is if the same relationship that exists on the sender always gets 
recreated on the receiver as well, and if that's the case, then as long 
as you still have the parent on what was the sender, you can still use 
that parent in the receiving role as well.

> What would you do in the case of a new install? You import your last
> home snapshot (as your primary home subvolume) using "btrfs send
> /mnt/backup-partition/home_timestamp | btrfs receive
> /mnt/root-partition/". Now you have imported your snapshot but it's still
> read only. How to make it writable? I simply did this for now by making
> a snapshot of the read only snapshot and renaming that snapshot to
> "home". Is that the right way to do it?

Yes, that's the way it's done.  You can't directly make a read-only 
snapshot writable, but you can take another snapshot of the read-only 
snapshot, and make it writable when you take it. =:^)
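
Something like this, reusing the paths from your mail:

  btrfs subvolume snapshot /mnt/root-partition/home_timestamp \
    /mnt/root-partition/home

Without -r the new snapshot is writable, while home_timestamp itself stays 
read-only and can keep serving as the send/receive reference.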

Then depending what you have in mind you can delete the read-only 
snapshot (tho if you're using it as a send/receive reference you probably 
don't want to do that), just keeping the writable one.

I had thought I saw that mentioned on the wiki somewhere, but I sure 
can't seem to find it now!

> Now on the new install we want to continue doing our backups using the
> old backup drive. Since we have the read only snapshot we imported (and
> we took a writeable snapshot of it that is now our primary snapshot) we
> can simply continue doing the incremental step without the bootstrap step
> right?

Correct.
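
In other words (a sketch, reusing your paths plus a made-up home_new name), 
the next backup round would be just:

  btrfs subvolume snapshot -r /mnt/root-partition/home \
    /mnt/root-partition/home_new
  btrfs send -p /mnt/root-partition/home_timestamp \
    /mnt/root-partition/home_new | btrfs receive /mnt/backup-partition

...since home_timestamp already exists, unmodified, on both sides.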

> You would really help me a great deal if you could answer some of
> the questions.

Hope that helps. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Backup: Compare sent snapshots
  2014-09-27  4:17     ` Duncan
@ 2014-09-27 10:01       ` G EO
  2014-09-28  4:31         ` Duncan
  0 siblings, 1 reply; 6+ messages in thread
From: G EO @ 2014-09-27 10:01 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

> What I had in mind was this (again, with the caveat that I'm not actually
> using send/receive myself, so I'd suggest testing or getting confirmation
> from someone that is, before depending on this):
> On the main filesystem, you did the first full send, we'll call it A.
> Then you did an incremental send with a new snapshot, B, using A as its
> parent, and later another, C, using B as its parent.
> On the backup, assuming you didn't delete anything and that the receives
> completed without error, you'd then have copies of all three, A, B, C.
> Now let's say you decide A is old and you no longer need it, so you
> delete it on both sides, leaving you with B and C.
> Now back on the main machine C is damaged.  But you still have B on both
> machines and C on the backup machine.
> What I was suggesting was that you could reverse the last send/receive,
> sending C from the backup with B as its parent (since B exists undamaged
> on both sides, with C undamaged on the backup but damaged or missing on
> the main machine), thereby restoring a valid copy of C on the main
> machine once again.
> Once you have a valid copy of C on the main machine again, you can now do
> normal incremental send/receive D, using C as its parent, just as you
> would have if C had never been damaged in the first place, because you
> restored a valid C reference on your main machine in order to be able
> to do so.


Assuming neither B nor A exists locally anymore and C is damaged (or
nonexistent) locally, there would be no way to determine the
differences (between C on the backup drive and the current active
subvolume on the root partition) anymore, right? At least not with send
-p, right?

> Of course that's with the caveat that you haven't done anything to
> reduplicate the data, breaking the sharing between snapshots.  The big
> factor there is defrag.  While snapshot-aware-defrag was introduced in I
> think kernel 3.9, that implementation turned out not to scale well AT
> ALL, and it was using 10s of GiB of RAM and taking days to defrag what
> should have been done in a few hours.  So they ended up disabling
> snapshot-aware-defrag again, until they can fix the scaling issues.  That
> means that when you defrag, you're defragging /just/ the snapshot you
> happened to point defrag at, and anything it moves in that defrag is
> in effect duplicated, since other snapshots previously sharing that data
> aren't defragged along with it so they keep a reference to the old,
> undefragged version, thus doubling the required space for anything moved.


Is defrag disabled by default?
Are "space_cache" and "compress-force=lzo" known to cause trouble with
snapshots?

> Snapshot size?  That's not as well defined as you might think.  Do you
> mean the size of everything in that snapshot, including blocks shared
> with other snapshots, or do you mean just the size of what isn't shared


The full size of a snapshot, including what is shared with others, is
easy to measure simply using "du", right?
For the relative size this should work with "qgroup show", right?
Although that also shows absolute values, they do not agree with
du here.
qgroup requires enabling quota though, right? Does it make a difference
to the values shown by qgroup whether quota was enabled from the
beginning or only recently?

Thanks for your awesome help, Duncan!


* Re: Backup: Compare sent snapshots
  2014-09-27 10:01       ` G EO
@ 2014-09-28  4:31         ` Duncan
  0 siblings, 0 replies; 6+ messages in thread
From: Duncan @ 2014-09-28  4:31 UTC (permalink / raw)
  To: linux-btrfs

G EO posted on Sat, 27 Sep 2014 12:01:22 +0200 as excerpted:

>> What I had in mind was this [...]:
>> On the main filesystem, you did the first full send, we'll call it A.
>> Then you did an incremental send with a new snapshot, B, using A as its
>> parent, and later another, C, using B as its parent.
>> On the backup, assuming you didn't delete anything and that the
>> receives completed without error, you'd then have copies of all three,
>> A, B, C. Now let's say you decide A is old and you no longer need it,
>> so you delete it on both sides, leaving you with B and C.
>> Now back on the main machine C is damaged.  But you still have B on
>> both machines and C on the backup machine.
>> What I was suggesting was that you could reverse the last send/receive,
>> sending C from the backup with B as its parent (since B exists
>> undamaged on both sides, with C undamaged on the backup but damaged or
>> missing on the main machine), thereby restoring a valid copy of C on
>> the main machine once again.
>> Once you have a valid copy of C on the main machine again, you can now
>> do normal incremental send/receive D, using C as its parent, just as
>> you would have if C had never been damaged in the first place, because
>> you restored a valid C reference on your main machine in order to be
>> able to do so.
> 
> Assuming neither B nor A exists locally anymore and C is damaged (or
> nonexistent) locally, there would be no way to determine the
> differences (between C on the backup drive and the current active
> subvolume on the root partition) anymore, right? At least not with send
> -p, right?

To the best of my knowledge that's absolutely correct.  There has to be a 
common parent reference on both sides in order to do the incremental 
send/receive based on it.

If you don't have that common base, then the best you can do is a full 
non-incremental send once again.  Either that or switch to some other 
alternative, say rsync, and forget about the btrfs-specific send/receive.
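
E.g. (placeholder paths again): take a fresh read-only snapshot, send it 
whole, and treat it as the new common base for future incrementals:

  btrfs subvolume snapshot -r /mnt/root-partition/home \
    /mnt/root-partition/home_base
  btrfs send /mnt/root-partition/home_base | btrfs receive /mnt/backup-partition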

>> That means that when you defrag, you're defragging /just/ the snapshot
>> you happened to point defrag at, and anything it moves in that defrag
>> is in effect duplicated, since other snapshots previously sharing that
>> data aren't defragged along with it so they keep a reference to the
>> old, undefragged version, thus doubling the required space for anything
>> moved.
> 
> 
> Is defrag disabled by default?

Defrag must be specifically triggered, either by enabling the autodefrag 
mount option or by running defrag manually, etc.

Of course it's possible some distros either enable autodefrag by default 
or set up a cron job or systemd timer for it, but that's them, not as 
shipped from btrfs upstream.

> Are "space_cache" and "compress-force=lzo" known to cause trouble with
> snapshots?

No.

Space_cache is enabled automatically these days, so if there's a big 
problem with it it'll affect many many people, and as such, should be 
detected and fixed rather quickly.

Not directly related to your question, but FWIW, there was a recent 
deadlock (not snapshot) bug with compress, obviously easier to trigger if 
compress-force is used.  Neither compress nor compress-force is enabled 
by default, yet due to their relatively wide use the bug still illustrates 
the point I made above about space_cache rather well: it was triggered 
by the switch from dedicated btrfs worker threads in 3.14 to generic 
kworker threads in 3.15.  It did get thru the full 3.15 cycle, but by 
the middle of the 3.16 development cycle the fact that there was a 
problem was well known, tho it was quite a tough bug to trace; it took to 
the end of the 3.16 cycle to pin down a general culprit and thru the 3.17 
commit window to develop and test a patch.  That patch was applied in 
3.17-rc2 and in 3.16.2.

IOW, while it's not the default, there are enough folks using compress and 
compress-force that a bug affecting them was found, traced, and fixed 
within two kernel cycles: the 3.15 commit window introduced it, and it was 
fixed in 3.17-rc2, immediately after the 3.17 commit window.

With space_cache being the default, a similar bug with it would almost 
surely not survive even the development cycle -- if it had been 
introduced in 3.15's commit window, people would have been yelling by rc2 
or 3 at the latest, and if it weren't traced, the entire btrfs patch 
series for 3.15 would have very likely been reverted by rc6 or rc7, thus 
never making it into a full release kernel at all.


Now btrfs is still under heavy development.  However, both snapshots and 
compression are widely enough used both on their own and together that if 
there was a serious existing problem with the combination, it'd be known, 
and any fleeting bug should be detected and made just that, fleeting, 
within I'd say 2-3 kernel cycles.

But meanwhile, other bugs are constantly being fixed, so unless you 
specifically know of such a bug and are avoiding the most current kernel 
or two due to that, staying current, latest kernel of the latest stable 
kernel series at least, is very strongly recommended.  During the 
situation above some folks were retreating to 3.14 since it was both 
before the problem and a long-term-stable-series, but the problem wasn't 
known until early in the 3.16 cycle and was fixed early in the 3.17 
cycle, and that exception can be noted both due to its rarity and the 
fact that it effectively existed only a single kernel cycle.

>> Snapshot size?  That's not as well defined as you might think.  Do you
>> mean the size of everything in that snapshot, including blocks shared
>> with other snapshots, or do you mean just the size of what isn't shared
> 
> 
> The full size of a snapshot, including what is shared with others is
> easy to measure simply using "du", right?

Agreed.
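
E.g. (placeholder path):

  du -sh /mnt/snapshots/home_timestamp

du just walks the files and sums their usage with no idea of extent sharing, 
so shared data is counted in full for each snapshot you run it on.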

> For the relative size this should work with "qgroup show", right?

I've completely avoided anything having to do with quotas/qgroups here, 
as they're both well outside my use-case and had known issues of their 
own until very recently.  The quotas/qgroups code was in fact recently 
(last kernel cycle I believe) rewritten and /should/ be rather more 
reliable/stable now, but as I'm not directly following it I'm not even 
sure whether that rewrite is yet even entirely committed.  And if it is, 
it's definitely still fresh enough code that its functionality and 
general stability remain open questions.

So except for pure experimental purposes I'd avoid doing anything at all 
with qgroups ATM.  Assuming the new code is in fact all in at this point, 
I'd recommend waiting out the 3.18 kernel cycle and if it appears to be 
stable by the time 3.18 is released and the 3.19 kernel cycle starts, 
consider it then.
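
If you do want to experiment anyway, the commands in question are along 
these lines (on a throwaway filesystem, per the caveats above):

  btrfs quota enable /mnt/test
  btrfs qgroup show /mnt/test

with qgroup show listing referenced and exclusive sizes per qgroup.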

Beyond that, definitely compare notes with someone already using
quotas/qgroups, as I simply don't know.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Thread overview: 6+ messages
2014-03-30  8:58 Backup: Compare sent snapshots GEO
2014-03-30 22:41 ` Duncan
2014-09-26 16:15   ` G EO
2014-09-27  4:17     ` Duncan
2014-09-27 10:01       ` G EO
2014-09-28  4:31         ` Duncan
