* Virtual Device Support
@ 2013-05-10 14:03 George Mitchell
  2013-05-19 11:04 ` Martin
  2013-05-19 11:15 ` Roman Mamedov
  0 siblings, 2 replies; 21+ messages in thread
From: George Mitchell @ 2013-05-10 14:03 UTC (permalink / raw)
  To: linux-btrfs

One of the things that is frustrating me the most at this point from a user 
perspective regarding btrfs is the current lack of virtual devices to 
describe volumes and subvolumes.  The current method of simply using a 
random member device or a LABEL or a UUID is just not working well for 
me.  Having a well thought out virtual device infrastructure would 
greatly simplify use and also relieve problems with util-linux tools.  I 
REALLY DO understand that most if not all btrfs devs at this point are 
working overtime on things much more serious than this issue and I 
REALLY DO have the patience to wait on this one.  But I also REALLY DO 
hope there is some plan for this in the future.  Certainly a lot of 
design for it should already exist within software RAID md code and LVM 
code.


* Re: Virtual Device Support
  2013-05-10 14:03 Virtual Device Support George Mitchell
@ 2013-05-19 11:04 ` Martin
  2013-05-19 14:49   ` George Mitchell
  2013-05-19 11:15 ` Roman Mamedov
  1 sibling, 1 reply; 21+ messages in thread
From: Martin @ 2013-05-19 11:04 UTC (permalink / raw)
  To: linux-btrfs

On 10/05/13 15:03, George Mitchell wrote:
> One of the things that is frustrating me the most at this point from a user
> perspective ...  The current method of simply using a
> random member device or a LABEL or a UUID is just not working well for
> me.  Having a well thought out virtual device infrastructure would...

Sorry, I'm a bit lost by your comments...

What is your use case and what are you hoping/expecting to see?


I've been following development of btrfs for a while and I'm looking
forward to using it to efficiently replace some of the very useful
features of LVM2, drbd, and md-raid that I'm using at present...

OK, so the way of managing all that is going to be a little different.

How would you want that?


Regards,
Martin



* Re: Virtual Device Support
  2013-05-10 14:03 Virtual Device Support George Mitchell
  2013-05-19 11:04 ` Martin
@ 2013-05-19 11:15 ` Roman Mamedov
  2013-05-19 18:18   ` Chris Murphy
  1 sibling, 1 reply; 21+ messages in thread
From: Roman Mamedov @ 2013-05-19 11:15 UTC (permalink / raw)
  To: george; +Cc: linux-btrfs


On Fri, 10 May 2013 07:03:38 -0700
George Mitchell <george@chinilu.com> wrote:

> One of the things that is frustrating me the most at this point from a user 
> perspective regarding btrfs is the current lack of virtual devices to 
> describe volumes and subvolumes.

From a user perspective btrfs subvolumes have a lot in common with just
regular directories aka folders, and nothing in common with (block)devices.
"Describing them with virtual devices" does not seem to make a whole lot of
sense.

-- 
With respect,
Roman



* Re: Virtual Device Support
  2013-05-19 11:04 ` Martin
@ 2013-05-19 14:49   ` George Mitchell
  2013-05-19 17:18     ` Martin
       [not found]     ` <CAHGunUke143r3pj0Piv3AtJrJO1x8Bm+qS5Z+sY1G1EobhMG_w@mail.gmail.com>
  0 siblings, 2 replies; 21+ messages in thread
From: George Mitchell @ 2013-05-19 14:49 UTC (permalink / raw)
  Cc: linux-btrfs

In reply to both of these comments in one message, let me give you an 
example.

I use shell scripts to mount and unmount btrfs volumes for backup 
purposes.  Most of these volumes are not listed in fstab, simply because 
I do not want to clutter my fstab with volumes that are used 
only for backup.  So the only way I can mount them is either by LABEL or 
by UUID.  But I can't unmount them by either LABEL or UUID, because that 
is not supported by util-linux and they have no intention of supporting 
it in the future.  So I have to resort to unmounting by directory, and 
the constant back and forth between LABEL and directory becomes very 
confusing when you are dealing with complex shell scripts.  This is 
intolerable for me, so I use a kludge that allows me to first translate 
from LABEL to device and then unmount by device.  To me it just seems 
klutzy that one has to resort to these sorts of games to use a file 
system that is supposed to be an improvement on what we already have.  A 
simple virtual volume identifier would resolve that.  Doing the same for 
subvolumes would be nice, but I could live without it with no problem.  
I have worked with *nixes for 30 years, beginning with AT&T pre-SVR UNIX 
on DEC PDP-11/70s, and have seen a lot of changes since those days, most 
of them for the better.  But, while functionality is mandatory, 
convenience is always appreciated and can help avoid costly mistakes and 
save time.  As I stated in my original post, I KNOW and appreciate that 
all of you are working hard on things that matter far more than this 
trivial item.  But it is a major convenience and clarity issue for me, 
and I am sure it will be for others as well.  It is only rational that 
one should be able to expect to mount by LABEL and unmount by LABEL, but 
that doesn't work, and a major part of the reason it doesn't work is 
that btrfs does not conform to the pattern of just about every other 
file system on the planet in regards to how it treats mount points.  And 
this is not even to mention all the other issues involved, like the 
large number of utilities that have no way of knowing that a given 
partition is mounted, which would also be resolved by virtual mount 
points, since many if not most of those utilities understand and process 
virtual volume identifiers.
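
To make that concrete, here is a minimal sketch of the kludge (the label 
and mount point are invented for illustration):

  mount LABEL=BACKUP /backup     # mounting by label works fine
  # ... run the backup ...
  DEV=$(blkid -L BACKUP)         # translate the LABEL to a member device
  umount "$DEV"                  # umount won't take LABEL=, but takes a device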

Please just do me a favor and think about this a bit before you just 
write it off.

- George



On 05/19/2013 04:04 AM, Martin wrote:
> On 10/05/13 15:03, George Mitchell wrote:
>> One of the things that is frustrating me the most at this point from a user
>> perspective ...  The current method of simply using a
>> random member device or a LABEL or a UUID is just not working well for
>> me.  Having a well thought out virtual device infrastructure would...
> Sorry, I'm a bit lost by your comments...
>
> What is your use case and what are you hoping/expecting to see?
>
>
> I've been following development of btrfs for a while and I'm looking
> forward to using it to efficiently replace some of the very useful
> features of LVM2, drbd, and md-raid that I'm using at present...
>
> OK, so the way of managing all that is going to be a little different.
>
> How would you want that?
>
>
> Regards,
> Martin
>

On 05/19/2013 04:15 AM, Roman Mamedov wrote:
> On Fri, 10 May 2013 07:03:38 -0700
> George Mitchell <george@chinilu.com> wrote:
>
>> One of the things that is frustrating me the most at this point from a user
>> perspective regarding btrfs is the current lack of virtual devices to
>> describe volumes and subvolumes.
>  From a user perspective btrfs subvolumes have a lot in common with just
> regular directories aka folders, and nothing in common with (block)devices.
> "Describing them with virtual devices" does not seem to make a whole lot of
> sense.
>


* Re: Virtual Device Support
  2013-05-19 14:49   ` George Mitchell
@ 2013-05-19 17:18     ` Martin
       [not found]     ` <CAHGunUke143r3pj0Piv3AtJrJO1x8Bm+qS5Z+sY1G1EobhMG_w@mail.gmail.com>
  1 sibling, 0 replies; 21+ messages in thread
From: Martin @ 2013-05-19 17:18 UTC (permalink / raw)
  To: linux-btrfs

OK, so to summarise:


On 19/05/13 15:49, George Mitchell wrote:
> In reply to both of these comments in one message, let me give you an
> example.
> 
> I use shell scripts to mount and unmount btrfs volumes for backup
> purposes.  Most of these volumes are not listed in fstab simply because
> I do not want to have to clutter my fstab with volumes that are used
> only for backup.  So the only way I can mount them is either by LABEL or
> by UUID.  But I can't unmount them by either LABEL or UUID because that
> is not supported by util-linux and they have no intention of supporting
> it in the future.  So I have to resort to unmounting by directory ...

Which all comes down to a way of working...

Likewise, I have some old and long-used backup scripts that mount a
one-of-many backup disk pack.  My solution is to use filesystem labels
and to use 'sed' to update just the one line in /etc/fstab for the
backup mount point label, so that the backups are then mounted/unmounted
by the mount point.
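
A hedged sketch of that approach (the label, mount point, and fstab line
are all invented here, and space-separated fstab fields are assumed):

  # Repoint the one /mnt/backup line in fstab at today's disk pack,
  # then mount and unmount purely by mount point.
  sed -i 's|^LABEL=[^ ]* /mnt/backup |LABEL=BACKUP2 /mnt/backup |' /etc/fstab
  mount /mnt/backup
  # ... backups run ...
  umount /mnt/backup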

I've never been able to use the /dev/sdXX numbering because the multiple
physical drives can be detected in a different order.

Agreed that, for the sake of consistency, being able to unmount by
filesystem label is a nice/good idea.  But is there any interest in that
being picked up?  Perhaps put a bug/feature request onto bugzilla?


I would guess that most developers focus on the mount point and let
fstab/mtab sort out the details...

Regards,
Martin



* Re: Virtual Device Support
  2013-05-19 11:15 ` Roman Mamedov
@ 2013-05-19 18:18   ` Chris Murphy
  2013-05-19 18:22     ` Chris Murphy
  2013-05-21  1:08     ` Duncan
  0 siblings, 2 replies; 21+ messages in thread
From: Chris Murphy @ 2013-05-19 18:18 UTC (permalink / raw)
  To: Btrfs BTRFS


On May 19, 2013, at 5:15 AM, Roman Mamedov <rm@romanrm.ru> wrote:

> On Fri, 10 May 2013 07:03:38 -0700
> George Mitchell <george@chinilu.com> wrote:
> 
>> One of the things that is frustrating me the most at this point from a user 
>> perspective regarding btrfs is the current lack of virtual devices to 
>> describe volumes and subvolumes.
> 
> From a user perspective btrfs subvolumes have a lot in common with just
> regular directories aka folders, and nothing in common with (block)devices.
> "Describing them with virtual devices" does not seem to make a whole lot of
> sense.

It's not possible to mount regular directories with other file systems. In some ways the btrfs subvolume behaves like a folder. In other ways it acts like a device. If you stat the mount point for btrfs subvolumes, you get a unique device ID for each.
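
For example (assuming / and /home are both btrfs subvolume mounts):

  stat -c '%d %n' / /home    # prints a distinct device number for each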

It seems inconsistent that both mount and umount allow a /dev/ designation, but only mount honors LABEL and UUID.

I can mount with mount /dev/disk/by-uuid/xxxxxxx, but if I use umount /dev/disk/by-uuid/xxxxxxxx I get a bogus error:

umount: /dev/disk/by-uuid/xxxxxxxxxx: not mounted

mount /dev/disk/by-uuid will autocomplete/list UUIDs in that directory; umount will not.  So just from a consistency standpoint that seems broken.


Chris Murphy


* Re: Virtual Device Support
  2013-05-19 18:18   ` Chris Murphy
@ 2013-05-19 18:22     ` Chris Murphy
  2013-05-21  1:08     ` Duncan
  1 sibling, 0 replies; 21+ messages in thread
From: Chris Murphy @ 2013-05-19 18:22 UTC (permalink / raw)
  To: Btrfs BTRFS


On May 19, 2013, at 12:18 PM, Chris Murphy <lists@colorremedies.com> wrote:

> 
> It's not possible to mount regular directories with other file systems. In some ways the btrfs subvolume behaves like a folder. In other ways it acts like a device. If you stat the mount point for btrfs subvolumes, you get a unique device ID for each.

Also a possible use case for btrfs subvolumes as virtual devices, if this isn't already possible or reliable, is in place of LVs for virtual machines. It's very convenient to point a VM to local LVM storage and have it use an LV, which can easily be created, located, and destroyed, and presented to the VM as a single block device. A btrfs subvolume isn't a block device of course, but if a VM could be pointed at one in this fashion, it would be as useful in this scenario as an LV.
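
As a hedged sketch of that idea (all paths and names invented), a
subvolume per guest would be trivial to create, locate, and destroy, even
though what the VM gets handed is a file rather than a true block device:

  btrfs subvolume create /var/lib/vms/guest1
  truncate -s 20G /var/lib/vms/guest1/disk.raw   # raw image in the subvolume
  # ... point the VM at disk.raw as its disk ...
  btrfs subvolume delete /var/lib/vms/guest1     # tear the guest down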

Otherwise I have to create another btrfs file system on a qcow2 image which then resides on btrfs. So it's btrfs on btrfs, which performance-wise I think is an undesirable hit.

Chris Murphy


* Re: Virtual Device Support
  2013-05-19 18:18   ` Chris Murphy
  2013-05-19 18:22     ` Chris Murphy
@ 2013-05-21  1:08     ` Duncan
  2013-05-21  2:17       ` George Mitchell
  2013-05-21  3:37       ` Virtual Device Support Chris Murphy
  1 sibling, 2 replies; 21+ messages in thread
From: Duncan @ 2013-05-21  1:08 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy posted on Sun, 19 May 2013 12:18:19 -0600 as excerpted:


> On May 19, 2013, at 5:15 AM, Roman Mamedov <rm@romanrm.ru> wrote:
> 
>> From a user perspective btrfs subvolumes have a lot in common with just
>> regular directories aka folders, and nothing in common with
>> (block)devices.
>> "Describing them with virtual devices" does not seem to make a whole
>> lot of sense.
> 
> It's not possible to mount regular directories with other file systems.

Actually, it /is/ possible, using bind-mounts, etc.  These even work at 
the individual file level, and I use a few that way here, for mounting 
usable device files over an otherwise nodev mounted filesystem (used for 
a named/bind chroot, bind-mounted and then remounted nodev,noexec, etc.).
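
For example (paths invented):

  mount --bind /srv/data /mnt/export         # mount a plain directory elsewhere
  mount --bind /dev/null /chroot/dev/null    # works per-file too; target must exist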

But yes, bind-mounts are an exception to the general rule.  However, 
they're an exception that does make your above claim questionable, at 
least.  btrfs subvolumes are another such exception.

> In some ways the btrfs subvolume behaves like a folder. In other ways it
> acts like a device. If you stat the mount point for btrfs subvolumes,
> you get a unique device ID for each.

Agreed.

> It seems inconsistent that both mount and umount allow a /dev/ designation,
> but only mount honors LABEL and UUID.

Yes.  I had tested btrfs a year ago and decided to wait so haven't been 
active here for 8 months or so, but am now getting back into btrfs as my 
requirements are different now, and as I'm reading the list, I've seen 
this frustrating inconsistency complained about more than once.  I'm 
about to set up a new btrfs system here once again, so don't yet know if 
it'll affect me personally, but given that I routinely use labels in 
fstab, it certainly could, depending on how the umounts are handled.  But 
at least I have a heads-up on the issue and can thus work around it 
should I need to.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Virtual Device Support
  2013-05-21  1:08     ` Duncan
@ 2013-05-21  2:17       ` George Mitchell
  2013-05-21  3:59         ` Duncan
  2013-05-21  3:37       ` Virtual Device Support Chris Murphy
  1 sibling, 1 reply; 21+ messages in thread
From: George Mitchell @ 2013-05-21  2:17 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

Duncan,  The problem affects btrfs volumes that span multiple drives.  If 
you are using btrfs on a single drive, it works just fine.  But in a 
multidrive situation, sometimes it works (when umount guesses the right 
device name) and sometimes it fails (when umount guesses the wrong 
device name).  Have fun!   - George

On 05/20/2013 06:08 PM, Duncan wrote:
> Chris Murphy posted on Sun, 19 May 2013 12:18:19 -0600 as excerpted:
>
>
>> On May 19, 2013, at 5:15 AM, Roman Mamedov <rm@romanrm.ru> wrote:
>>
>>>  From a user perspective btrfs subvolumes have a lot in common with just
>>> regular directories aka folders, and nothing in common with
>>> (block)devices.
>>> "Describing them with virtual devices" does not seem to make a whole
>>> lot of sense.
>> It's not possible to mount regular directories with other file systems.
> Actually, it /is/ possible, using bind-mounts, etc.  These even work at
> the individual file level, and I use a few that way here, for mounting
> usable device files over an otherwise nodev mounted filesystem (used for
> a named/bind chroot, bind-mounted and then remounted nodev,noexec, etc.).
>
> But yes, bind-mounts are an exception to the general rule.  However,
> they're an exception that does make your above claim questionable, at
> least.  btrfs subvolumes are another such exception.
>
>> In some ways the btrfs subvolume behaves like a folder. In other ways it
>> acts like a device. If you stat the mount point for btrfs subvolumes,
>> you get a unique device ID for each.
> Agreed.
>
>> It seems inconsistent that both mount and umount allow a /dev/ designation,
>> but only mount honors LABEL and UUID.
> Yes.  I had tested btrfs a year ago and decided to wait so haven't been
> active here for 8 months or so, but am now getting back into btrfs as my
> requirements are different now, and as I'm reading the list, I've seen
> this frustrating inconsistency complained about more than once.  I'm
> about to set up a new btrfs system here once again, so don't yet know if
> it'll affect me personally, but given that I routinely use labels in
> fstab, it certainly could, depending on how the umounts are handled.  But
> at least I have a heads-up on the issue and can thus work around it
> should I need to.
>



* Re: Virtual Device Support
  2013-05-21  1:08     ` Duncan
  2013-05-21  2:17       ` George Mitchell
@ 2013-05-21  3:37       ` Chris Murphy
  2013-05-21 12:06         ` Martin
  1 sibling, 1 reply; 21+ messages in thread
From: Chris Murphy @ 2013-05-21  3:37 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs


On May 20, 2013, at 7:08 PM, Duncan <1i5t5.duncan@cox.net> wrote:

> Chris Murphy posted on Sun, 19 May 2013 12:18:19 -0600 as excerpted:
> 
>> It seems inconsistent that both mount and umount allow a /dev/ designation,
>> but only mount honors LABEL and UUID.
> 
> Yes.  

I'm going to contradict myself and point out that mount with label or UUID is made unambiguous via either the default subvolume being mounted, or the -o subvol= option being specified. The volume label and UUID don't apply to umount, because there the command would be ambiguous. You'd have to umount a mountpoint, or possibly a subvolume-specific UUID.
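
For example (label and subvolume name assumed):

  mount LABEL=DATA /mnt/data                  # unambiguous: default subvolume
  mount -o subvol=home LABEL=DATA /mnt/home   # unambiguous: named subvolume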


Chris Murphy


* Re: Virtual Device Support
  2013-05-21  2:17       ` George Mitchell
@ 2013-05-21  3:59         ` Duncan
  2013-05-21  5:21           ` George Mitchell
  2013-05-21 12:19           ` Virtual Device Support ("N-way mirror code") Martin
  0 siblings, 2 replies; 21+ messages in thread
From: Duncan @ 2013-05-21  3:59 UTC (permalink / raw)
  To: linux-btrfs

George Mitchell posted on Mon, 20 May 2013 19:17:39 -0700 as excerpted:

> Duncan,  The problem affects btrfs volumes that span multiple drives.  If
> you are using btrfs on a single drive, it works just fine.  But in a
> multidrive situation, sometimes it works (when umount guesses the right
> device name) and sometimes it fails (when umount guesses the wrong
> device name).  Have fun!   - George

Thanks.  I had inferred that, but am glad to have it confirmed.  My planned 
usage will indeed be multi-device, as I'm going to be using btrfs raid1, 
but now that you mention the multi-device trigger explicitly, I 
understand why my tests a year ago didn't run into the problem: those 
tests were single-device.  (I had inferred the multi-device connection, 
but hadn't made the additional connection to my earlier tests being 
single-device until you explicitly mentioned the multi-device trigger.)


My personal btrfs history is a bit complicated.  Until about a year ago, 
I was running md-raid-1 with four aging 300GB "spinning rust" drives
(having earlier experimented with raid-6 and lvm, then ultimately 
deciding raid1 was better for me, but the raid6 experiment was the reason 
for the four).  Because they /were/ aging, I wasn't particularly 
comfortable with the thought of reducing redundancy down to two, and was 
rather dismayed to find out that btrfs' so-called raid1 support wasn't 
raid-1 in the traditional sense of N-way mirroring at all, but was 
limited to two-way-mirroring regardless of the number of physical 
devices.  So I researched but didn't deploy at that time, waiting for the 
raid-6 support (followed by n-way-mirroring) that was then planned for 
the next kernel cycle or two, but as we know now, took several kernel 
cycles to hit, with N-way-mirroring still not available.

Then I ran into hardware issues that turned out to be bad caps on my 8-
year-old mobo (tho it was dual-socket first-gen Opteron, which I had 
upgraded to top-of-its-line dual-core Opteron 290s, thus four cores @ 2.8 
GHz, with 8 gigs RAM, so it wasn't as performance-dated as its age might 
otherwise imply).

However, those issues first appeared as storage errors, so knowing the 
drives were aging, I thought it was them, and I replaced, upgrading for 
the first time to 2.5" from the older 3.5" drives.  However, that was 
still the middle of the recession and I had severe budget issues, so I 
decided to try my luck with a single drive (temporarily) in place of the 
four I had been running, fortunately enough, since it didn't turn out to 
be the drive at all.  But not knowing that at the time and having the 
opportunity now with a single drive that was new and thus should be 
reliable, I decided to try btrfs on it, which was where my actual 
deployment and testing time came from.

But meanwhile, the hardware problems continued, and I found the old 
reiserfs was MUCH more stable under those conditions than btrfs, which 
would often be missing entire directory trees after a crash, where 
reiserfs would be missing maybe the last copied file or two.  (I was 
still trying to copy from the old raid1 to the new single device hard 
drive, so was copying entire trees over... all on severely unstable 
motherboard hardware that was frequently timing out SATA commands... 
sometimes to resume after a couple minutes, sometimes not.  Of course at 
the time I was still blaming it on the old drives since that was what I 
was copying from.  It only became apparent that they weren't the issue 
once I had enough on the new drive to try running from it with the others 
detached.)

So under those conditions I decided btrfs was definitely *NOT* 
appropriate, and returned to the long stable reiserfs, which I've 
continued using until now.

Meanwhile, after getting enough on the new drive to run from it, I 
realized it wasn't the old drives that were the problem, and eventually 
realized that I had bulging and even a few popped caps.  So mobo upgrade 
time it was.

Fortunately, bad financial situation tho I was in, I still had good 
enough credit to get an account approved at the local Fry's Electronics, 
and I was able to purchase mobo/cpu/memory/graphics, all upgraded at 
once, mostly even on year-same-as-cash terms.  And fortunately, my 
financial situation has improved dramatically since then, so that's long 
ago paid off and I'm on to new things.

One of those new things is a pair of SSDs, hence my renewed interest in 
btrfs, since reiserfs' journaling is definitely NOT SSD friendly.

Meanwhile, btrfs having a year to mature since my initial tests, and now 
being on new and actually WORKING (!!) hardware, and as I said, needing 
an alternative to the reiserfs I've been using for over a decade now, I'm 
again interested in btrfs.

This time around, I'm still interested in the checksumming and data 
integrity features and in particular the multi-device angle, but the N-
way-mirroring isn't as important now since I'm looking at only two 
devices anyway, and even that's a step up from my current single device.  
In addition, new this time, I'm now interested in the SSD features.

Meanwhile, given the SSDs and the impressive benchmarks I've seen for it, 
I've also looked into f2fs, but decided it's WAAAYYY too immature to 
seriously consider at this point.

I considered ext4 as well, but the ext* filesystems and I have just never 
gotten along very well, one of the reasons I've stuck with reiserfs for 
so long.  Here, time and time again reiserfs has been proven impressively 
stable, especially since ordered-journaling became the default back in 
2.6.16 or whatever, even thru hardware issues like the above that should 
challenge ANY filesystem.  Personally, I think part of the problem with 
ext* is that too many kernel devs think they know it well enough to mess 
with it, when they don't.  I know people who suffered data loss during 
the ext3 defaulting to writeback (as opposed to ordered) journaling 
period, for instance, while I was safe on reiserfs, which most kernel 
devs are scared to touch, so it just continues working as it always has.  
(Personally, I'd had enough of a demonstration of the evils of writeback 
journaling with pre-ordered reiserfs to seriously doubt the sanity of the 
ext3 switch to writeback journaling by default from the moment I heard 
about it, and of course had I been running ext3, I'd have switched back 
to ordered immediately.  But the person I know who lost data due to that 
wasn't following kernel development closely enough to know why; he just 
knew it was happening.  When I saw he was running ext3, and his kernel 
version, I asked about the journaling option, and sure enough, when he 
switched to ordered, the problem disappeared!)

Also, having run reiserfs for years, I see no reason for a modern fs to 
not have tail-packing (tho f2fs arguably has an excuse due to the 
technology it's targeting).  ext* has never had that, while btrfs has an 
equivalent.  I guess it's possible with ext4 and kernel 3.6+ now, but 
there are still serious limitations to it on ext4.

So I'll try btrfs raid1 mode on the SSDs, and get the checksumming and 
data integrity features that the raid1 mode provides. =:^)  Plus I'll 
have btrfs' tail-packing, and the comfort of knowing that the same Chris 
Mason who worked on reiserfs for so long and introduced reiserfs ordered 
journal mode to mainline (IDR whether he originally authored it or not) 
is lead on btrfs now. =:^)

And hopefully, now that btrfs raid5/6 is in, in a few cycles the N-way 
mirrored code will make it as well, and I can add a third SSD and 
rebalance to it.  That time gap should ensure the third device is 
sufficiently separate in lot and possibly model number as well, so I 
don't have all my raid-1 data eggs in the same device lot basket. =:^)

Meanwhile, the old "spinning rust" drive, still with demonstrated 
reliable reiserfs, can continue to serve as a backup, both to the new SSD 
technology and same-lot raid-1 devices, and to the still not fully stable 
btrfs I'll be trying to run on them in "production" mode.

So yes, it's likely I'll have to devise workarounds to the multi-device 
btrfs label= umount problem myself.  I guess I'll find out in a day or 
two, as I actually deploy and my experiment progresses. =:^)

And yes, have fun I expect I will. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Virtual Device Support
  2013-05-21  3:59         ` Duncan
@ 2013-05-21  5:21           ` George Mitchell
  2013-05-21 12:19           ` Virtual Device Support ("N-way mirror code") Martin
  1 sibling, 0 replies; 21+ messages in thread
From: George Mitchell @ 2013-05-21  5:21 UTC (permalink / raw)
  To: Duncan, linux-btrfs

On 05/20/2013 08:59 PM, Duncan wrote:
> Then I ran into hardware issues that turned out to be bad caps on my 
> 8- year-old mobo (tho it was dual-socket first-gen opteron, which I 
> had upgraded to top-of-its-line dual-core Opteron 290s, thus four 
> cores @ 2.8 GHz, with 8 gigs RAM, so it wasn't as performance-dated as 
> its age might otherwise imply). However, those issues first appeared 
> as storage errors, so knowing the drives were aging, I thought it was 
> them, and I replaced, upgrading for the first time to 2.5" from the 
> older 3.5" drives.
And that is actually an advantage of using btrfs.  Because btrfs, 
unlike conventional RAID, is very compatible with and simple to use 
alongside SMART.  That way, by watching your system logs or journal, you 
are immediately made aware of any hard drive issues.
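
For example (device name assumed):

  smartctl -H /dev/sda           # quick overall health verdict
  journalctl -k | grep -i ata    # watch the kernel log for drive errors
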
> But meanwhile, the hardware problems continued, and I found the old 
> reiserfs was MUCH more stable under those conditions than btrfs, which 
> would often be missing entire directory trees after a crash, where 
> reiserfs would be missing maybe the last copied file or two. (I was 
> still trying to copy from the old raid1 to the new single device hard 
> drive, so was copying entire trees over... all on severely unstable 
> motherboard hardware that was frequently timing out SATA commands... 
> sometimes to resume after a couple minutes, sometimes not. Of course 
> at the time I was still blaming it on the old drives since that was 
> what I was copying from. It only became apparent that they weren't the 
> issue once I had enough on the new drive to try running from it with 
> the others detached.)
What can I say?  Hans Reiser, aside from his horrendous character flaws, 
is a software genius.  After a terrifying experience of having a hard 
drive fail with no backups, Hans Reiser and his helpers came to my aid, 
and in short order I had all my data back intact.  I found that pretty 
impressive.  After that, for a number of years I continued to use 
Reiserfs in a software RAID 1 configuration.  I never ever had any 
complaints about Reiserfs.  I really liked it and still do.  I really 
wanted to see Reiser4 see the light of day, but after Mr Reiser's 
incarceration, that has become more and more unlikely.  So, in 2009 I 
switched to hardware RAID 1 on a pair of old 3ware cards.  But filesystem 
RAID as offered by btrfs and zfs has the distinct advantage of not 
having to face those terrifying syncs after loss of a drive.  So now, as 
of April 2013, I am 100% on btrfs (well, almost).  I use five 500GB 
Seagate 2½" drives with everything but the boot filesystem (the boot 
filesystem is spread across two Seagate 2½" 80GB drives, formatted 
btrfs) spread across them in a RAID1 configuration.  Additionally, 100% 
of the content of those drives gets backed up to another 500GB Seagate 
drive (formatted JFS) via cron every 3hrs, AND to a 4TB Seagate drive 
(formatted btrfs raw, sans partitioning) via anacron daily, weekly, 
monthly, etc.  I also have a stripped-down maintenance OS running on 
ext4 that I use to back up the main OS itself daily as a manual 
operation.  I am running this on a SuperMicro board with a dual-core 
Pentium processor.  Not a particularly muscular system, but a VERY 
stable one.  I also use a small UPS unit, which I think is a very good 
idea if you are doing write caching with btrfs, or any other filesystem 
for that matter.  I use a CyberPower unit, and CyberPower has a very 
nifty UPS tool for Linux which does auto-shutdown on low battery without 
intervention.

All in all I am very happy with this arrangement so far.  It has worked 
flawlessly in most respects and I really, really like btrfs.  The two 
bugs in the ointment for me right now are 1) the infamous boot bug on 
kernel 3.8, whereby one gets repeated boot failures due to "open_ctree 
failure" and can only boot the system successfully after multiple 
attempts.  And 2) the btrfs incompatibilities like the umount issue, 
which I script my way around by doing `mount -l`, which provides both 
the LABEL and the mount point on the same line, then `grep`ping out the 
label, extracting the mount point with a `cut`, and feeding the verified 
mount point to `umount` (sketched below).  That all works very sweet 
even if it is a bit klutzy.  And I am well enough aware of the dark side 
of all of this to steer clear of fatal moves with partitioning tools 
etc. that don't have a clue that a mounted btrfs partition is ... a 
mounted btrfs partition, even if it is NOT the mount point.  My only 
real concern at this point is the boot issue.  Overall, I have a lot 
fewer problems now than I did with hardware RAID and have not the least 
desire to go back.  btrfs could be better, but it's still head and 
shoulders above any other approach I have tried, though that does not 
include zfs, which I have also heard very good things about.
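
Sketched out, that umount workaround looks something like this (the label
is invented; `mount -l` appends the label in square brackets):

  MP=$(mount -l | grep '\[BACKUP\]$' | cut -d' ' -f3)
  [ -n "$MP" ] && umount "$MP"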



* Re: Virtual Device Support
  2013-05-21  3:37       ` Virtual Device Support Chris Murphy
@ 2013-05-21 12:06         ` Martin
  2013-05-22  2:23           ` Chris Murphy
  0 siblings, 1 reply; 21+ messages in thread
From: Martin @ 2013-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs

On 21/05/13 04:37, Chris Murphy wrote:
> 
> On May 20, 2013, at 7:08 PM, Duncan <1i5t5.duncan@cox.net> wrote:
> 
>> Chris Murphy posted on Sun, 19 May 2013 12:18:19 -0600 as
>> excerpted:
>> 
>>> It seems inconsistent that both mount and umount allow a /dev/
>>> designation, but only mount honors LABEL and UUID.
>> 
>> Yes.
> 
> I'm going to contradict myself and point out that mount with label or
> UUID is made unambiguous via either the default subvolume being
> mounted, or the -o subvol= option being specified. The volume label
> and UUID don't apply to umount, because there the command would be
> ambiguous. You'd have to umount a mountpoint, or possibly a
> subvolume-specific UUID.


I'll admit that I prefer working with filesystem labels.


This is getting rather semantic... From "man umount", this is what
umount intends:

#####
umount [-dflnrv] {dir|device}...

The  umount  command  detaches the file system(s) mentioned from the
file hierarchy.  A file system is specified by giving the directory
where it has been mounted.  Giving the special device on which the file
system lives may also work, but is obsolete, mainly because it will fail
in case this device was mounted on more than one directory.
#####


I guess the ideas of labels, UUIDs, and multiple devices came along a few
years later?...  For btrfs, umount needs to operate on the default subvol,
but with the means to also specify a specific subvol if needed.

One hook for btrfs to extend what/how 'umount' operates might be to
perhaps extend what can be done with a "/sbin/(u?)mount.btrfs" 'helper'?
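
As a purely hypothetical sketch of such a helper (no such tool exists
today; umount(8) would invoke /sbin/umount.btrfs automatically if present):

  #!/bin/sh
  # umount.btrfs: accept LABEL=/UUID= specs, resolve them to a device,
  # then hand off to umount -i so this helper isn't invoked again.
  spec="$1"
  case "$spec" in
    LABEL=*|UUID=*) dev=$(blkid -t "$spec" -o device | head -n1) ;;
    *)              dev="$spec" ;;
  esac
  exec umount -i "$dev"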


Regards,
Martin



* Re: Virtual Device Support ("N-way mirror code")
  2013-05-21  3:59         ` Duncan
  2013-05-21  5:21           ` George Mitchell
@ 2013-05-21 12:19           ` Martin
  2013-05-23 16:08             ` Martin Steigerwald
  1 sibling, 1 reply; 21+ messages in thread
From: Martin @ 2013-05-21 12:19 UTC (permalink / raw)
  To: linux-btrfs

Duncan,

Thanks for quite a historical summary.

Yep, ReiserFS has stood the test of time very well, and I'm still using
and abusing it on various servers, going all the way back something like
a decade!

More recently I've been putting newer systems on ext4 mainly to take
advantage of extents for large files on all disk types, and also
deferred allocation to hopefully reduce wear on SSDs.

Meanwhile, I've seen no need to change the ReiserFS on the existing
systems, even for the multi-terabyte backups.  The nearly unlimited file
linking is beautiful for creating, in effect, incremental backups spanning
years!

All on raid1 or raid5, and all remarkably robust.

Enough waffle! :-)


On 21/05/13 04:59, Duncan wrote:
> And hopefully, now that btrfs raid5/6 is in, in a few cycles the N-way 
> mirrored code will make it as well

I too am waiting for the "N-way mirrored code" for example to have 3
copies of data/metadata across 4 physical disks.

When might that hit? Or is there a stable patch that can be added into
kernel 3.8.13?


Regards,
Martin



* Re: Virtual Device Support
       [not found]     ` <CAHGunUke143r3pj0Piv3AtJrJO1x8Bm+qS5Z+sY1G1EobhMG_w@mail.gmail.com>
@ 2013-05-21 14:26       ` George Mitchell
  0 siblings, 0 replies; 21+ messages in thread
From: George Mitchell @ 2013-05-21 14:26 UTC (permalink / raw)
  To: linux-btrfs

In my case, I am backing up a system spanning five drives formatted 
btrfs, on a separate drive containing a separate backup volume and 
multiple complete backups, each from a different point in time.  This 
gives me protection from filesystem corruption, since the backups are on 
a separate volume, and also protection from accidental deletion and 
other such issues, by having backups spread over time going back as far 
as three months.  It also makes things very simple, since I can just 
mount one of these backup subvolumes in place of the original and 
immediately be up and running.  Of course, all btrfs volumes could die 
at once as a result of some obscure problem such as a poison update or 
something like that, and that is why I keep a constantly updated JFS 
copy (and periodic backups to Blu-ray) on hand.  I realize this is not 
foolproof, and I actually plan to extend it further.  But huge drives 
are terribly inexpensive right now and this is one way I can take 
advantage of that.  Of course I could have done this using multiple 
partitions, and I may one day regret not doing it that way for the very 
reasons you point out.  However, I believe that I am sufficiently 
protected at this point to take the risk.  I really figure that if 
something were to corrupt both my main system AND the backup volume at 
the same time, it would probably knock out separate partitions as well.  
But ... perhaps not.  And I am indeed aware that one filesystem 
corruption could knock out all of those backups in one sweep.  As for 
the umount issue, it is ridiculously easy to script around; it just 
seems like somebody, either on the util-linux side or on the btrfs side, 
could provide a more elegant solution, but that seems to fly in the face 
of entrenched ideologies on both sides.  Fortunately, the only real 
problem that I can't script around is the boot issue, and that is 
hopefully just a matter of time before it gets fixed.  Thanks for your 
thoughts,  George

On 05/21/2013 01:16 AM, Michael Johnson - MJ wrote:
> I realize I am a bit late to the party on this, but I would like 
> to understand the details of the workflow you are describing, as I am 
> not seeing the benefit to creating backups on different subvolumes 
> with btrfs (but that's not to say there aren't reasons).
>
> The way I've gone about things is to have one btrfs volume mounted at 
> say /mnt/btrfs, back up to it, and then create read-only snapshots 
> in /mnt/btrfs/.snapshots.  I think this gives me all the benefits of 
> what you are describing without any of the hassle.
>
> Now, if my btrfs were to become corrupted, I would lose all of my 
> backups, but I believe you would be in the same boat using 
> completely separate subvolumes, as they are still part of the same 
> underlying data structure.  You would have a similar issue with zfs, I 
> believe.
>
> With more traditional filesystems, where the volume management is 
> separate from the filesystem, having 2 separate instances of, say, xfs 
> on lvm volumes or even different partitions, your filesystem 
> corruption would not spread from volume A to volume B, so that level of 
> separation makes sense.  But with btrfs, treating the subvolumes like 
> this does not provide such protection.  If this is the reason for your 
> workflow, it may simply be that it used to provide some benefit, but 
> that benefit is gone.  As such I think you can actually simplify your 
> workflow and utilize btrfs more fully.
>
> That being said, I don't really know your workflow and the reasons for 
> it, so it is quite possibly a reasonable thing to keep doing.  I 
> simply don't have enough information, and I know that I have oftentimes 
> found myself annoyed with a change, spent a lot of time working 
> around the change, and then later realized that I was just stuck in an 
> old way of thinking and that the new way allowed a more elegant workflow.
>
> But that is all just food for thought.  I do agree that if I can say 
> "mount LABEL=foo" it would be expected that I could also say "umount 
> LABEL=foo".  Perhaps the right thing to do would be to modify the 
> btrfs command to allow 'btrfs mount' and 'btrfs umount', similar to the 
> way zfs works, as this would allow some fun magic to happen outside of 
> util-linux.
>
> In any case, I hope my thoughts are at least a little useful.  Cheers!
>
>
> On Sun, May 19, 2013 at 7:49 AM, George Mitchell <george@chinilu.com> wrote:
>
>     In reply to both of these comments in one message, let me give you
>     an example.
>
>     I use shell scripts to mount and unmount btrfs volumes for backup
>     purposes.  Most of these volumes are not listed in fstab simply
>     because I do not want to have to clutter my fstab with volumes
>     that are used only for backup.  So the only way I can mount them
>     is either by LABEL or by UUID.  But I can't unmount them by either
>     LABEL or UUID because that is not supported by util-linux and they
>     have no intention of supporting it in the future.  So I have to
>     resort to unmounting by directory and it becomes back and forth
>     between LABEL and directory which becomes very confusing when you
>     are dealing with complex shell scripts.  This is intolerable for
>     me so I use a kludge that allows me to first translate from LABEL
>     to device and then unmount by device.  To me it just seems klutzy
>     that one has to resort to these sorts of games to use a file
>     system that is supposed to be an improvement on what we already
>     have.  A simple virtual volume identifier would resolve that.
>     Doing the same for subvolumes would be nice, but I could live
>     without it with no problem.  I have worked with *nixes for 30 years,
>     beginning with AT&T pre-SVR UNIX on DEC PDP-11/70s, and have seen a lot of
>     changes since those days, most of them for the better.  But, while
>     functionality is mandatory, convenience is always appreciated and
>     can help avoid costly mistakes and save time.  As I stated in my
>     original post, I KNOW and appreciate that all of you are working
>     hard on things that matter far more than this trivial item.  But
>     it is a major convenience and clarity issue for me and I am sure
>     it will be for others as well.  It is only rational that one
>     should be able to expect to mount by LABEL and unmount by LABEL,
>     but that doesn't work, and a major part of the reason that doesn't
>     work is that btrfs does not conform to the pattern of just about
>     every other file system on the planet in regards to how it treats
>     mount points.  And this is not even to mention all the other
>     issues involved like a large number of utilities that have no way
>     of knowing that a given partition is mounted, which would also be
>     resolved by virtual mount points since many if not most of those
>     utilities understand and process virtual volume identifiers.
>
>     Please just do me a favor and think about this a bit before you
>     just write it off.
>
>     - George
>
>
>
>
>     On 05/19/2013 04:04 AM, Martin wrote:
>
>         On 10/05/13 15:03, George Mitchell wrote:
>
>             One of the things that is frustrating me the most at this
>             point from a user
>             perspective ...  The current method of simply using a
>             random member device or a LABEL or a UUID is just not
>             working well for
>             me.  Having a well thought out virtual device
>             infrastructure would...
>
>         Sorry, I'm a bit lost by your comments...
>
>         What is your use case and what are you hoping/expecting to see?
>
>
>         I've been following development of btrfs for a while and I'm
>         looking
>         forward to using it to efficiently replace some of the very useful
>         features of LVM2, drbd, and md-raid that I'm using at present...
>
>         OK, so the way of managing all that is going to be a little
>         different.
>
>         How would you want that?
>
>
>         Regards,
>         Martin
>
>
>
>
>     On 05/19/2013 04:15 AM, Roman Mamedov wrote:
>
>         On Fri, 10 May 2013 07:03:38 -0700
>         George Mitchell <george@chinilu.com> wrote:
>
>             One of the things that is frustrating me the most at this
>             point from a user
>             perspective regarding btrfs is the current lack of virtual
>             devices to
>             describe volumes and subvolumes.
>
>          From a user perspective btrfs subvolumes have a lot in common
>         with just
>
>         regular directories aka folders, and nothing in common with
>         (block)devices.
>         "Describing them with virtual devices" does not seem to make a
>         whole lot of
>         sense.
>
>
>
>
>
> -- 
> Michael Johnson - MJ



* Re: Virtual Device Support
  2013-05-21 12:06         ` Martin
@ 2013-05-22  2:23           ` Chris Murphy
  0 siblings, 0 replies; 21+ messages in thread
From: Chris Murphy @ 2013-05-22  2:23 UTC (permalink / raw)
  To: Btrfs BTRFS


On May 21, 2013, at 8:06 AM, Martin <m_btrfs@ml1.co.uk> wrote:

> On 21/05/13 04:37, Chris Murphy wrote:
>> 
>> I'm going to contradict myself and point out that mount with label or
>> UUID is made unambiguous via either the default subvolume being
>> mounted, or the -o subvol= option being specified. The volume label
> and UUID don't apply to umount, because there the command would be
> ambiguous. You'd have to umount a mountpoint, or possibly a
> subvolume-specific UUID.
> 
> I guess the ideas of labels and UUID and multiple devices came out a few
> years later?... For btrfs, umount needs to operate on the default subvol
> but with the means for also specifying a specific subvol if needed.

Yeah, and I think specifying -o for umount isn't what the devs are interested in doing, which is understandable. But I'm pretty sure there are btrfs subvolume UUIDs now? So even if umount doesn't support -o, it should still support /dev/disk/by-uuid/xxxxxxxxxxx just as it does /dev/sda1 or whatever.


Chris Murphy



* Re: Virtual Device Support ("N-way mirror code")
  2013-05-21 12:19           ` Virtual Device Support ("N-way mirror code") Martin
@ 2013-05-23 16:08             ` Martin Steigerwald
  2013-05-24  1:41               ` George Mitchell
  2013-05-24  6:13               ` Duncan
  0 siblings, 2 replies; 21+ messages in thread
From: Martin Steigerwald @ 2013-05-23 16:08 UTC (permalink / raw)
  To: Martin; +Cc: linux-btrfs

On Tuesday, 21 May 2013, 13:19:31, Martin wrote:
> Yep, ReiserFS has stood the test of time very well, and I'm still using
> and abusing it on various servers, going all the way back something like
> a decade!

Very interesting. I only used it for a short time and it worked.

But co-workers lost several ReiserFS filesystems completely.

Well, if you search for the terms corrupt and your favorite filesystem, you 
will always find hits.

Anyway, I won't use ReiserFS 3 today, for several reasons:

1) It is not actively developed anymore, but more in maintenance mode.  I 
know for some that might be a reason to use it, but I think this basically 
increases the risk of breakage instead of reducing it.  That said, I didn't 
hear of any, and JFS is also in maintenance but appears to work as well.

2) To my knowledge, fsck.reiserfs cannot tell the filesystem I am checking 
apart from any possible ReiserFS 3 filesystems in virtual machine image 
files stored on it, happily mixing them together into one huge mess.

3) To my knowledge, mount times of large partitions can be quite long with 
ReiserFS 3.

That said, I am using BTRFS on my main laptop, even for /home now, after 
having used it on several other machines for more than a year.  Despite 
that weird scrub issue that I "fixed" by redoing the filesystem (the rsync 
backup appeared to be okay), I am ready to trust my data to BTRFS.  Also, 
my backup harddisks are BTRFS.

I like BTRFS for several reasons, two of which immediately come to mind:

1) It can prove to me that the data is intact. I find this rather valuable.

2) Due to snapshot support, I now have snapshots for my backups, even on 
SSD for my /home.  I am not yet creating them in an automated way, but I 
do use them.
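
For example, a manual read-only snapshot for backup purposes (paths
assumed):

  btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)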

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Virtual Device Support ("N-way mirror code")
  2013-05-23 16:08             ` Martin Steigerwald
@ 2013-05-24  1:41               ` George Mitchell
  2013-05-25 11:53                 ` Martin Steigerwald
  2013-05-24  6:13               ` Duncan
  1 sibling, 1 reply; 21+ messages in thread
From: George Mitchell @ 2013-05-24  1:41 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Martin, linux-btrfs

On 05/23/2013 09:08 AM, Martin Steigerwald wrote:
> 3) To my knowledge, mount times of large partitions can be quite 
> long with ReiserFS 3.

That may well be, but I certainly wouldn't consider btrfs mount times 
"fast" in such cases.

[root@localhost ghmitch]# time mount LABEL=BACKUP /backup

real    0m18.133s
user    0m0.000s
sys     0m0.190s
[root@localhost ghmitch]#



* Re: Virtual Device Support ("N-way mirror code")
  2013-05-23 16:08             ` Martin Steigerwald
  2013-05-24  1:41               ` George Mitchell
@ 2013-05-24  6:13               ` Duncan
  2013-05-25 11:56                 ` Martin Steigerwald
  1 sibling, 1 reply; 21+ messages in thread
From: Duncan @ 2013-05-24  6:13 UTC (permalink / raw)
  To: linux-btrfs

Martin Steigerwald posted on Thu, 23 May 2013 18:08:35 +0200 as excerpted:

> On Tuesday, 21 May 2013, 13:19:31, Martin wrote:
>> Yep, ReiserFS has stood the test of time very well, and I'm still using
>> and abusing it on various servers, going all the way back something like
>> a decade!
> 
> Very interesting. I only used it for a short time and it worked.
> 
> But co-workers lost several ReiserFS filesystems completely.

Do you know if that was before (Chris's) ordered-mode patches?  I never 
lost complete FSs (even once when my AC went out and the disks overheated 
resulting in a head-crash... the disks worked again once temps returned 
to normal, tho I did lose some data where the platters were very likely 
physically damaged due to the head-crash), but before the ordered-mode 
patches, I did lose data a number of times due to simple loss of power or 
system lockup.

So I learned to keep tested backups, tho they weren't always current.  
But a few hours of repeated work on a not-current backup copy sure beats 
days of recreation from scratch, and when that head-crash happened, I was 
glad I had 'em!

But after data=ordered, the only data loss was due to that physical head 
crash, and even then it was whatever files were in the physically damaged 
area, not the entire filesystem.  And given that, plus various other 
hardware problems I've had, including wonky memory in various forms and a 
mobo that popped a few capacitors in the sata bus area but that I was 
still able to run if I kept it cold enough (when it started timing out 
operations I'd know it was too warm), problems that, notably, btrfs could 
NOT handle... yes, I have some pretty deep respect for reiserfs now.  It 
survived hardware issues that nobody could /sanely/ expect /any/ 
filesystem to survive, yet reiserfs did.

> Well, if you search for the terms corrupt and your favorite filesystem,
> you will always find hits.

True.

> Anyway, I won't use ReiserFS 3 today, for several reasons:
> 
> 1) It is not actively developed anymore, but more in maintenance mode.
> I know for some that might be a reason to use it, but I think this
> basically increases the risk of breakage instead of reducing it. That
> said, I didn't hear of any, and JFS is also in maintenance but appears
> to work as well.

Well, there's a difference between being left to rot, which I'm beginning 
to be concerned might be where reiserfs is headed at this point, and being 
simply mature and feature-complete, so that the only real maintenance 
needed is keeping up with the ever-changing kernel API, which people 
changing that API must do for anything in-kernel.  The latter is where 
reiserfs has been for some time now.

And as I believe I mentioned earlier, being simply mature is definitely 
better than where ext4 is, and where for some time (altho not so much any 
longer) ext3 was: a filesystem every kernel hacker and their brother seems 
to consider worth changing, including even Linus himself when he took the 
ext3 writeback-by-default commit, which lasted for several kernel cycles 
and proved a bad decision for data safety in a number of cases I know 
about personally.

You mentioned jfs is in a similar position, what I'd call mature but 
maintained.

FWIW, I'd consider XFS to be a pretty good example of a somewhat more 
assertive middle ground: still being actively developed, with new features 
being added, but generally by a core xfs group, not everybody and their 
brother, and arguably with a cautious enough approach that (like reiserfs, 
with ordered mode, extended attributes, and quotas added after it 
was declared feature-complete and more or less abandoned by its previous 
upstream developer) it's actually much more stable and broadly usable 
these days than it was in its heyday, when it had a reputation of being 
great for UPS-backed enterprise systems, but for eating data on joe-user 
line-only-powered systems should that line power disappear.

> 2) To my knowledge, fsck.reiserfs cannot tell the filesystem I am
> checking apart from any possible ReiserFS 3 filesystems in virtual
> machine image files stored on it, happily mixing them together into one
> huge mess.

AFAIK that's limited to the --rebuild-tree option, which comes with 
pretty scary warnings and requires not just a y/n, but actually a fully 
typed out yes, to proceed.  So it's not something that people should 
normally run -- they should be going to the backups if they have them 
before they run --rebuild-tree.  But it's there for those who didn't HAVE 
backups, and who are prepared to gamble with SOME data loss in order to 
have the chance at SOME recovery.  And even then, the instructions in the 
warning say to ddrescue or the like to create a backup image before 
trying to recover, just in case.
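
Following those instructions would look roughly like this (device and
paths assumed):

  ddrescue /dev/sdb1 /mnt/spare/sdb1.img /mnt/spare/sdb1.map   # image it first
  reiserfsck --rebuild-tree /dev/sdb1                          # then gamble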

However, yes, with those caveats AFAIK that's still an issue.  Had such 
usage been foreseen all those years ago, I'm sure the implementation 
would have been rather different.

> 3) To my knowledge, mount times of large partitions can be quite long
> with ReiserFS 3.

Hmmm... Never seen that except in the case where it's replaying a 
journal, and in that case the mount time is limited by the size of the 
journal to replay.

However, it's quite likely that I simply don't run large enough 
filesystems for it to be an issue here, since I tend to use partitions 
far more heavily than most, so I've never actually run a reiserfs of more 
than a few hundred GB, let alone the multiple TBs common on unpartitioned 
multi-TB disks today.

And multiple partitions will continue to be the case here with btrfs, as 
well.  I'll use snapshots for the convenience of rollback, but probably 
won't be using the general case subvolume support, preferring entirely 
separate partitions instead.

Because a strong multi-partition policy has saved my *** more than once!  
The case that really iron-clad it for me was actually before I switched 
to Linux, when I was still running MS servantware (see my sig for 
context).  I was beta-testing the IE4 previews, and MS changed the way it 
handled the IE cache index file for performance reasons, maintaining its 
absolute disk addresses in memory instead of grabbing the info from the 
disk each time.

Then some of the testers started having horrible cross-linked-file 
problems, with a number of them losing valuable data.  Turns out that 
(MS' own) defrag was moving the files out from under IE, which as of IE4 
was the desktop shell and thus running all the time, including while the 
defrag ran.  When IE later wrote the index back to the absolute disk 
addresses the file had occupied before, it overwrote whatever files the 
defragger had moved into that spot in the meantime.

Eventually, MS fixed the problem by simply marking the cache files as 
system, read-only, so the defragger wouldn't touch them.

But in the meantime, a bunch of people running the affected IE4 pre-
releases lost data!

However, all I lost was a few unimportant temp files, because I had 
Internet Temporary Files located on my TEMP partition, and the only 
files on that partition besides IE's cache were other temporary files -- 
no big deal if they got overwritten with the IE cache index!

And actually, I never even bothered reconfiguring defrag to avoid the 
problem, even after the problem was known and before it was fixed.  The 
only files possibly affected were temporary anyway.  No big deal.

But that reinforced what had until then been simply gut instinct into an 
iron-clad rule that I continue to observe to this day, of course on 
freedomware Linux now -- THOU SHALT KEEP THY FILE DATA TYPES SEPARATE!  
That means separate partitions if not separate physical drives, not some 
weird new-fangled subvolume thing where, if the filesystem metadata gets 
screwed up, it's a potential loss of everything on all the subvolumes 
despite the subvolume separation.

And it has saved me trouble at least once, and I believe at least twice, 
on Linux as well (tho unlike that first MS experience, which left such 
an impression and created that iron-clad rule, I don't remember much 
about the details of these -- they just weren't that big a deal, since 
my existing partitioning policy prevented them from becoming one)... and 
that's just the stuff I *KNOW* about!

So other than for the convenience of snapshots (which as the wiki says do 
NOT replace backups), I have no plans for btrfs subvolumes at all.  From 
my perspective, either it's the same general type of data and simply 
keeping it in ordinary directory trees is separation enough, or it NEEDS 
its own separate partition; there's no namby-pamby subvolume middle 
ground to be had.
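
To make that concrete, the sort of layout I mean looks roughly like this 
(a hypothetical sketch -- the labels, filesystems and mountpoints are 
examples, not a recommendation for any particular system):

  # /etc/fstab: one partition per data type, each its own filesystem.
  LABEL=root  /      btrfs  defaults         0 0
  LABEL=home  /home  btrfs  defaults         0 0
  LABEL=var   /var   btrfs  defaults         0 0
  LABEL=tmp   /tmp   ext2   defaults,noatime 0 0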

(FWIW, I have similar philosophical issues with LVM, tho I realize 
there's a LOT of people using it simply because that's what many distros 
install by default -- basically everything on LVM.)

> That said, I am using BTRFS on my main laptop even for /home now, after
> having used it on several other machines for more than a year. Apart
> from that weird scrub issue that I "fixed" by redoing the filesystem
> (the rsync backup appeared to be okay), I am ready to trust my data to
> BTRFS. Also my backup harddisks are BTRFS.
> 
> I like BTRFS for some reasons, two that immediately come to my mind:
> 
> 1) It can prove to me that the data is intact. I find this rather
> valuable.

Indeed.
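
For anyone following along, that proof comes from the checksums verified 
on every read, plus scrub, which verifies everything at once.  Roughly 
(the mountpoint here is just an example):

  # Kick off a whole-filesystem checksum verification in the background.
  btrfs scrub start /mnt/data

  # Check progress and error counts while it runs or after it finishes.
  btrfs scrub status /mnt/data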

> 2) Due to snapshots I now have snapshots for my backup. And even
> on SSD for my /home. I am not yet creating those in an automated way,
> but I do use them.

As I already mentioned with the warning on the wiki: do be aware of the 
limitations of snapshots.  They're NOT the same as separate backups.  I 
believe you know that already and just didn't mention it, but I'm 
worried about others who might come across your comment.
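
The safe pattern is a snapshot for a stable local view, plus a real copy 
on a physically separate disk.  A minimal sketch, assuming /home is 
itself a btrfs subvolume (the paths are made up):

  # Snapshot first, so the copy is taken from a stable read-only view.
  btrfs subvolume snapshot -r /home /home/.snap-backup

  # Then copy that view to a separate backup disk.
  rsync -aHAX --delete /home/.snap-backup/ /mnt/backupdisk/home/

  # Drop the temporary snapshot when done.
  btrfs subvolume delete /home/.snap-backup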

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Virtual Device Support ("N-way mirror code")
  2013-05-24  1:41               ` George Mitchell
@ 2013-05-25 11:53                 ` Martin Steigerwald
  0 siblings, 0 replies; 21+ messages in thread
From: Martin Steigerwald @ 2013-05-25 11:53 UTC (permalink / raw)
  To: george; +Cc: Martin, linux-btrfs

On Thursday, 23 May 2013, 18:41:11, George Mitchell wrote:
> On 05/23/2013 09:08 AM, Martin Steigerwald wrote:
> > 3) As to my knowledge mount times of large partitions can be quite
> > long with ReiserFS 3.
> 
> That may well be, but I certainly wouldn't consider btrfs mount times
> "fast" in such cases.
> 
> [root@localhost ghmitch]# time mount LABEL=BACKUP /backup
> 
> real    0m18.133s
> user    0m0.000s
> sys     0m0.190s
> [root@localhost ghmitch]#

Well, yes, I have already seen somewhat longer mount times for my 2 TB 
backup disk with quite a few snapshots as well.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Virtual Device Support ("N-way mirror code")
  2013-05-24  6:13               ` Duncan
@ 2013-05-25 11:56                 ` Martin Steigerwald
  0 siblings, 0 replies; 21+ messages in thread
From: Martin Steigerwald @ 2013-05-25 11:56 UTC (permalink / raw)
  To: linux-btrfs

On Friday, 24 May 2013, 06:13:04, Duncan wrote:
> > 2) Due to snapshots I now have snapshots for my backup. And even
> > on SSD for my /home. I am not yet creating those in an automated way,
> > but I do use them.
> 
> As I already mentioned with the warning on the wiki: do be aware of the
> limitations of snapshots.  They're NOT the same as separate backups.  I
> believe you know that already and just didn't mention it, but I'm
> worried about others who might come across your comment.

Well, a snapshot is not a backup. Just like RAID is not a backup.

:)

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2013-05-25 11:56 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-05-10 14:03 Virtual Device Support George Mitchell
2013-05-19 11:04 ` Martin
2013-05-19 14:49   ` George Mitchell
2013-05-19 17:18     ` Martin
     [not found]     ` <CAHGunUke143r3pj0Piv3AtJrJO1x8Bm+qS5Z+sY1G1EobhMG_w@mail.gmail.com>
2013-05-21 14:26       ` George Mitchell
2013-05-19 11:15 ` Roman Mamedov
2013-05-19 18:18   ` Chris Murphy
2013-05-19 18:22     ` Chris Murphy
2013-05-21  1:08     ` Duncan
2013-05-21  2:17       ` George Mitchell
2013-05-21  3:59         ` Duncan
2013-05-21  5:21           ` George Mitchell
2013-05-21 12:19           ` Virtual Device Support ("N-way mirror code") Martin
2013-05-23 16:08             ` Martin Steigerwald
2013-05-24  1:41               ` George Mitchell
2013-05-25 11:53                 ` Martin Steigerwald
2013-05-24  6:13               ` Duncan
2013-05-25 11:56                 ` Martin Steigerwald
2013-05-21  3:37       ` Virtual Device Support Chris Murphy
2013-05-21 12:06         ` Martin
2013-05-22  2:23           ` Chris Murphy
