* System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
@ 2015-10-13 21:21 Carmine Paolino
  2015-10-14  5:08 ` Duncan
  2015-10-15  4:39 ` Zygo Blaxell
  0 siblings, 2 replies; 7+ messages in thread
From: Carmine Paolino @ 2015-10-13 21:21 UTC (permalink / raw)
  To: linux-btrfs

Hi all,

I have a home server with 3 hard drives that I added to the same btrfs filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0 /`, and as soon as I ran `btrfs fi show /` I lost my ssh connection to the machine. The machine is still on, but it doesn’t even respond to ping: I always get a request timeout and sometimes even a "host is down" message. Its fans are spinning at full blast and the hard drives’ LEDs are registering activity all the time. I also run Plex Home Theater there, and the display output is stuck at the moment I ran those two commands. I left it running because I fear losing everything by powering it down manually.

Should I leave it like this and let it finish? How long might it take? (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a 2 TB USB 2.0 one, so the transfer speeds are pretty low.) Is it safe to power it off manually? Should I file a bug afterwards?

Any help would be appreciated.

Thanks,
Carmine

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-13 21:21 System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /` Carmine Paolino
@ 2015-10-14  5:08 ` Duncan
  2015-10-14  9:13   ` Hugo Mills
  2015-10-15  4:39 ` Zygo Blaxell
  1 sibling, 1 reply; 7+ messages in thread
From: Duncan @ 2015-10-14  5:08 UTC (permalink / raw)
  To: linux-btrfs

Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:

> I have a home server with 3 hard drives that I added to the same btrfs
> filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
> /`, and as soon as I ran `btrfs fi show /` I lost my ssh connection to
> the machine. The machine is still on, but it doesn’t even respond to
> ping[. ...]
> 
> (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a 2 TB
> USB 2.0 one, so the transfer speeds are pretty low)

I won't attempt to answer the primary question[1] directly, but can point 
out that in many cases, USB-connected devices simply don't have a stable 
enough connection to work reliably in a multi-device btrfs.  There are 
several possibilities for failure, including flaky connections (sometimes 
assisted by cats or kids), unstable USB host port drivers, and unstable 
USB/ATA translators.  A number of folks have reported problems with such 
filesystems with devices connected over USB, that simply disappear if 
they direct-connect the exact same devices to a proper SATA port.  The 
problem seems to be /dramatically/ worse with USB connected devices, than 
it is with, for instance, PCIE-based SATA expansion cards.

Single-device btrfs with USB-attached devices seem to work rather better, 
because at least in that case, if the connection is flaky, the entire 
filesystem appears and disappears at once, and btrfs' COW, atomic-commit 
and data-integrity features, kick in to help deal with the connection's 
instability.

Arguably, a two-device raid1 (both data/metadata, with metadata including 
system) should work reasonably well too, as long as scrubs are done after 
reconnection when there's trouble with one of the pair, because in that 
case, all data appears on both devices, but single and raid0 modes are 
likely to have severe issues in that sort of environment, because even 
temporary disconnection of a single device means loss of access to some 
data/metadata on the filesystem.  Raid10, 3+-device-raid1, and raid5/6, 
are more complex situations.  They should survive loss of at least one 
device, but keeping the filesystem healthy in the presence of unstable 
connections is... complex enough I'd hate to be the one having to deal 
with it, which means I can't recommend it to others, either.

So I'd recommend either connecting all devices internally if possible, or 
setting up the USB-connected devices with separate filesystems, if 
internal direct-connection isn't possible.

---
[1] Sysadmin's rule of backups.  If the data isn't backed up, by 
definition it is of less value than the resource and hassle cost of 
backup.  No exceptions -- post-loss claims to the contrary simply put the 
lie to the claims, as actions spoke louder than words and they defined 
the cost of the backup as more expensive than the data that would have 
been backed up.  Worst-case is then loss of data that was by definition 
of less value than the cost of backup, and the more valuable resource and 
hassle cost of the backup was avoided, so the comparatively lower value 
data loss is no big deal.

So in a case like this, I'd simply power down and take my chances of 
filesystem loss, strictly limiting the time and resources I'd devote to 
any further attempt at recovery, because the data is by definition either 
backed up, or of such low value that a backup was considered too 
expensive to do, meaning there's a very real possibility of spending more 
time in a recovery attempt that's iffy at best, than the data on the 
filesystem is actually worth, either because there are backups, or 
because it's throw-away data in the first place.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-14  5:08 ` Duncan
@ 2015-10-14  9:13   ` Hugo Mills
  2015-10-14 13:12     ` Austin S Hemmelgarn
  2015-10-14 21:09     ` Duncan
  0 siblings, 2 replies; 7+ messages in thread
From: Hugo Mills @ 2015-10-14  9:13 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 4797 bytes --]

On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
> Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
> 
> > I have a home server with 3 hard drives that I added to the same btrfs
> > filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
> > /`, and as soon as I ran `btrfs fi show /` I lost my ssh connection to
> > the machine. The machine is still on, but it doesn’t even respond to
> > ping[. ...]
> > 
> > (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a 2 TB
> > USB 2.0 one, so the transfer speeds are pretty low)
> 
> I won't attempt to answer the primary question[1] directly, but can point 
> out that in many cases, USB-connected devices simply don't have a stable 
> enough connection to work reliably in a multi-device btrfs.  There are 
> several possibilities for failure, including flaky connections (sometimes 
> assisted by cats or kids), unstable USB host port drivers, and unstable 
> USB/ATA translators.  A number of folks have reported problems with such 
> filesystems with devices connected over USB, that simply disappear if 
> they direct-connect the exact same devices to a proper SATA port.  The 
> problem seems to be /dramatically/ worse with USB connected devices, than 
> it is with, for instance, PCIE-based SATA expansion cards.
> 
> Single-device btrfs with USB-attached devices seem to work rather better, 
> because at least in that case, if the connection is flaky, the entire 
> filesystem appears and disappears at once, and btrfs' COW, atomic-commit 
> and data-integrity features, kick in to help deal with the connection's 
> instability.
> 
> Arguably, a two-device raid1 (both data/metadata, with metadata including 
> system) should work reasonably well too, as long as scrubs are done after 
> reconnection when there's trouble with one of the pair, because in that 
> case, all data appears on both devices, but single and raid0 modes are 
> likely to have severe issues in that sort of environment, because even 
> temporary disconnection of a single device means loss of access to some 
> data/metadata on the filesystem.  Raid10, 3+-device-raid1, and raid5/6, 
> are more complex situations.  They should survive loss of at least one 
> device, but keeping the filesystem healthy in the presence of unstable 
> connections is... complex enough I'd hate to be the one having to deal 
> with it, which means I can't recommend it to others, either.

   Note also that RAID-0 is a poor choice for this configuration,
because you'll only get 640 GB usable space out of it. With single,
you'll get the full sum of 2370 GB usable. With RAID-1, you'll have
320 GB usable. The low figures for the RAID-0 and -1 come from the
fact that you've got two small devices, and that both RAID-0 and
RAID-1 have a minimum of two devices per block group. You can play
around with the configurations at http://carfax.org.uk/btrfs-usage

   But I second Duncan's advice about not using USB. It's really not a
reliable configuration with btrfs.

   Hugo.

> So I'd recommend either connecting all devices internally if possible, or 
> setting up the USB-connected devices with separate filesystems, if 
> internal direct-connection isn't possible.
> 
> ---
> [1] Sysadmin's rule of backups.  If the data isn't backed up, by 
> definition it is of less value than the resource and hassle cost of 
> backup.  No exceptions -- post-loss claims to the contrary simply put the 
> lie to the claims, as actions spoke louder than words and they defined 
> the cost of the backup as more expensive than the data that would have 
> been backed up.  Worst-case is then loss of data that was by definition 
> of less value than the cost of backup, and the more valuable resource and 
> hassle cost of the backup was avoided, so the comparatively lower value 
> data loss is no big deal.
> 
> So in a case like this, I'd simply power down and take my chances of 
> filesystem loss, strictly limiting the time and resources I'd devote to 
> any further attempt at recovery, because the data is by definition either 
> backed up, or of such low value that a backup was considered too 
> expensive to do, meaning there's a very real possibility of spending more 
> time in a recovery attempt that's iffy at best, than the data on the 
> filesystem is actually worth, either because there are backups, or 
> because it's throw-away data in the first place.
> 

-- 
Hugo Mills             | There's an infinite number of monkeys outside who
hugo@... carfax.org.uk | want to talk to us about this new script for Hamlet
http://carfax.org.uk/  | they've worked out!
PGP: E2AB1DE4          |                                           Arthur Dent

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]


* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-14  9:13   ` Hugo Mills
@ 2015-10-14 13:12     ` Austin S Hemmelgarn
  2015-10-14 21:09     ` Duncan
  1 sibling, 0 replies; 7+ messages in thread
From: Austin S Hemmelgarn @ 2015-10-14 13:12 UTC (permalink / raw)
  To: Hugo Mills, Duncan, linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 3569 bytes --]

On 2015-10-14 05:13, Hugo Mills wrote:
> On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
>> Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
>>
>>> I have a home server with 3 hard drives that I added to the same btrfs
>>> filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
>>> /`, and as soon as I ran `btrfs fi show /` I lost my ssh connection to
>>> the machine. The machine is still on, but it doesn’t even respond to
>>> ping[. ...]
>>>
>>> (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a 2 TB
>>> USB 2.0 one, so the transfer speeds are pretty low)
>>
>> I won't attempt to answer the primary question[1] directly, but can point
>> out that in many cases, USB-connected devices simply don't have a stable
>> enough connection to work reliably in a multi-device btrfs.  There are
>> several possibilities for failure, including flaky connections (sometimes
>> assisted by cats or kids), unstable USB host port drivers, and unstable
>> USB/ATA translators.  A number of folks have reported problems with such
>> filesystems with devices connected over USB, that simply disappear if
>> they direct-connect the exact same devices to a proper SATA port.  The
>> problem seems to be /dramatically/ worse with USB connected devices, than
>> it is with, for instance, PCIE-based SATA expansion cards.
>>
>> Single-device btrfs with USB-attached devices seem to work rather better,
>> because at least in that case, if the connection is flaky, the entire
>> filesystem appears and disappears at once, and btrfs' COW, atomic-commit
>> and data-integrity features, kick in to help deal with the connection's
>> instability.
>>
>> Arguably, a two-device raid1 (both data/metadata, with metadata including
>> system) should work reasonably well too, as long as scrubs are done after
>> reconnection when there's trouble with one of the pair, because in that
>> case, all data appears on both devices, but single and raid0 modes are
>> likely to have severe issues in that sort of environment, because even
>> temporary disconnection of a single device means loss of access to some
>> data/metadata on the filesystem.  Raid10, 3+-device-raid1, and raid5/6,
>> are more complex situations.  They should survive loss of at least one
>> device, but keeping the filesystem healthy in the presence of unstable
>> connections is... complex enough I'd hate to be the one having to deal
>> with it, which means I can't recommend it to others, either.
>
>     Note also that RAID-0 is a poor choice for this configuration,
> because you'll only get 640 GB usable space out of it. With single,
> you'll get the full sum of 2370 GB usable. With RAID-1, you'll have
> 320 GB usable. The low figures for the RAID-0 and -1 come from the
> fact that you've got two small devices, and that both RAID-0 and
> RAID-1 have a minimum of two devices per block group. You can play
> around with the configurations at http://carfax.org.uk/btrfs-usage
>
>     But I second Duncan's advice about not using USB. It's really not a
> reliable configuration with btrfs.
I'd also second that statement, but go even further and say to not use 
USB for anything except backups and transferring data between computers 
unless you have absolutely no other option, and be wary of using any 
externally connected storage device for other use cases (I've seen 
similar issues with eSATA drives and BTRFS, and have heard of such 
issues with some Thunderbolt connected storage devices).



[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 3019 bytes --]


* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-14  9:13   ` Hugo Mills
  2015-10-14 13:12     ` Austin S Hemmelgarn
@ 2015-10-14 21:09     ` Duncan
  1 sibling, 0 replies; 7+ messages in thread
From: Duncan @ 2015-10-14 21:09 UTC (permalink / raw)
  To: linux-btrfs

Hugo Mills posted on Wed, 14 Oct 2015 09:13:25 +0000 as excerpted:

> On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
>> Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
>> 
>> > I have a home server with 3 hard drives that I added to the same
>> > btrfs filesystem. Several hours ago I ran `btrfs balance start
>> > -dconvert=raid0 /`, and as soon as I ran `btrfs fi show /` I lost my
>> > ssh connection to the machine. The machine is still on, but it
>> > doesn’t even respond to ping[. ...]
>> > 
>> > (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a
>> > 2 TB USB 2.0 one, so the transfer speeds are pretty low)
>> 
> 
>    Note also that RAID-0 is a poor choice for this configuration,
> because you'll only get 640 GB usable space out of it. With single,
> you'll get the full sum of 2370 GB usable. With RAID-1, you'll have 320
> GB usable. The low figures for the RAID-0 and -1 come from the fact that
> you've got two small devices, and that both RAID-0 and RAID-1 have a
> minimum of two devices per block group. You can play around with the
> configurations at http://carfax.org.uk/btrfs-usage

Thanks, Hugo.  I totally forgot about the sizing effects of raid0 with 
that sort of wide device mismatch, but you have a very good point as I 
can't imagine anyone limiting their space usage like that on purpose.

Tho AFAIK the raid0 available space would be 120*3 + (250-120)*2 =
360 + 130*2 = 360+260 = 620 GB?  (Raid0 stripe 3 devices wide until the 
120 gig device is full, 2 devices wide after that until the remainder of 
the 250 gig device is full, the rest of the 2 TB device unused as there's 
no second device available to raid0 stripe with.)

In the event that a raid0 of the three is desired, I'd partition up the 
big 2 TB device with the partition for the raid0 being 250 GB, same as 
the second largest device, with the remaining ~1750 GB then being 
available for use as a single-device filesystem.

Alternatively, one /could/ partition up the 2 TB device as eight 250 GB 
partitions, and add each of those separately to the raid0, thus letting 
the filesystem do 10-way raid0 striping on the first 120 GB, 9-way raid0 
striping after that, tho 8 of those stripes would actually be on 
partitions of the same physical device.  It'd certainly be slow, but it'd 
use the available space on the 2 TB device, if that was the primary 
object.
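That stripe-widening arithmetic can be sanity-checked with a short greedy simulation (just a sketch of the allocator behavior described above, not real btrfs code; `raid0_usable` is a made-up helper, sizes in GB):

```python
def raid0_usable(sizes, min_devs=2):
    """Greedy btrfs-style raid0 allocation: stripe chunks across every
    device that still has free space, at least min_devs wide."""
    free = list(sizes)
    usable = 0
    while True:
        live = [s for s in free if s > 0]
        if len(live) < min_devs:
            return usable          # leftover space on a lone device is unusable
        step = min(live)           # stripe until the smallest live device fills
        usable += step * len(live)
        free = [s - step if s > 0 else 0 for s in free]

# 250 GB + 120 GB + 2 TB as three whole devices:
print(raid0_usable([250, 120, 2000]))        # 620

# 2 TB drive split into eight 250 GB partitions, plus the other two drives:
print(raid0_usable([250, 120] + [250] * 8))  # 2370
```

It reproduces both the 620 GB figure for three whole devices and the full 2370 GB when the big drive is carved into 250 GB partitions.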

But single mode would certainly be more efficient, for sure.

Meanwhile, raid0 in general, even totally apart from btrfs, should always 
be considered usable only for throw-away data, either because there's a 
backup or because it really is throw-away (I used to use mdraid0 on 
partitions for my distro's package repo and similar effective internet 
local cache data, for instance, where loss of the raid0 simply meant 
redownloading what it had cached), because with raid0, loss of a single 
device means loss of everything on the raid, so you've effectively 
multiplied the chance of failure by the number of physical devices in the 
raid0.  So quite apart from the backups rule I mentioned earlier, raid0 
really does mean throw-away data, not worth worrying about recovery.  
Raid0's primary use is simply speed, but across USB2, it can't even 
really be used for speed, so there's really no use for it at all, unless 
one is simply doing it "because they can" (like the raid0 of about 50 
1.44 MB floppies I believe I once watched a youtube video of!).

Btrfs multi-device single mode (at least with single mode for metadata as 
well as data) isn't really better than raid0 in terms of reliability, but 
at least it lets you use the full capacity of the devices.  (Btrfs raid1 
metadata, single data, is the default for multi-device btrfs, and is at 
least in theory /somewhat/ more reliable than single-mode metadata, since 
at least in theory, a dropped device at least lets you recover anything 
that didn't happen to be on the bad device, but in practice I'd not 
recommend relying on that.  Plan as if it was raid0 reliability -- loss 
of a single device means loss of the entire thing.)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-13 21:21 System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /` Carmine Paolino
  2015-10-14  5:08 ` Duncan
@ 2015-10-15  4:39 ` Zygo Blaxell
  2015-10-15  7:59   ` Duncan
  1 sibling, 1 reply; 7+ messages in thread
From: Zygo Blaxell @ 2015-10-15  4:39 UTC (permalink / raw)
  To: Carmine Paolino; +Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 4258 bytes --]

On Tue, Oct 13, 2015 at 11:21:49PM +0200, Carmine Paolino wrote:
> I have a home server with 3 hard drives that I added to the same btrfs
> filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
> /`, and as soon as I ran `btrfs fi show /` I lost my ssh connection to
> the machine. The machine is still on, but it doesn’t even respond
> to ping: I always get a request timeout and sometimes even a "host
> is down" message. Its fans are spinning at full blast and the hard
> drives’ LEDs are registering activity all the time. I also run Plex Home
> Theater there, and the display output is stuck at the moment when
> I ran those two commands. I left it running because I fear losing
> everything by powering it down manually.
> 
> Should I leave it like this and let it finish? How long might it
> take? (I have a 250 GB internal hard drive, a 120 GB USB 2.0 one and a
> 2 TB USB 2.0 one, so the transfer speeds are pretty low.) Is it safe to
> power it off manually? Should I file a bug afterwards?

As others have pointed out, the raid0 allocator has a 2-disk-minimum
constraint, so any difference in size between the largest and
second-largest disk is unusable.  In your case that's 73% of the raw
space.
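A quick back-of-the-envelope check of that figure (sizes in GB; a sketch of the arithmetic, not btrfs's actual accounting):

```python
sizes = [250, 120, 2000]       # internal drive, USB 120 GB, USB 2 TB
raw = sum(sizes)               # 2370 GB of raw capacity
largest, second = sorted(sizes)[-1], sorted(sizes)[-2]
wasted = largest - second      # 1750 GB: the 2 TB drive's excess over 250 GB
print(f"{100 * wasted / raw:.1f}% of raw space unusable")
```

The excess of the largest over the second-largest device comes out at just under 74% of the raw capacity, i.e. the rough 73% quoted.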

If the two smaller disks were almost full (no space unallocated in 'btrfs
fi usage') before you converted to raid0, then immediately after starting
a conversion to raid0 you have no space left _at all_.  This is because
the space you previously had under some other data profile is no longer
considered "free" even if it isn't in use.  All future allocations must
be raid0, starting immediately, but no space is available for raid0
data chunks.

This will cause some symptoms like huge write latency (it will not take
seconds or minutes, but *hours* to write anything to the disk) and
insanely high CPU usage.

Normally btrfs gets slower exponentially as it gets full (this is arguably
a performance bug), so you'll have plenty of opportunity to get the system
under control before things get unusably slow.  What you have done is
somewhat different--you've gone all the way to zero free space all at
once, but you still have lots of what _might_ be free space to search
through when doing allocations.  Now your CPU is spending all of its time
searching everywhere for free space that isn't really there--and when
it doesn't find any free space, it immediately starts the search over
from scratch.

If you're running root on this filesystem, it is likely that various
daemons are trying to write data constantly, e.g. kernel log messages.
Each of these writes, no matter how small, will take hours.  Then the
daemons will be trying to log the fact that writes are taking hours.
Which will take hours.  And so on.  This flood of writes at nearly 20K
per hour will overwhelm the tiny amount of bandwidth btrfs can accommodate
in this condition.

The way to get out of this is to mount the filesystem such that nothing
is attempting to write to it (e.g. boot from rescue media).  Mount the
filesystem with the 'skip_balance' option, and do 'btrfs balance cancel
/fs; btrfs balance start -dconvert=single,soft /fs'.  Expect both commands
to take several hours (maybe even days) to run.

In theory, you can add another disk in order to enable raid0 allocations,
but you have to mount the filesystem and stop the running balance before
you can add any disks...and that will take hours anyway, so extra disks
won't really help.

If you can get a root shell and find the kworker threads that are spinning
on your CPU, you can renice them.  If you have RT priority processes in
your system, some random kworkers will randomly acquire RT privileges.
Random kworkers are used by btrfs, so when btrfs eats all your CPU it
can block everything for minutes at a time.  The kworkers obey the usual
schedtool commands, e.g. 'schedtool -D -n20 -v <pids of kworker threads>'
to make them only run when the CPU is idle.

> Any help would be appreciated.
> 
> Thanks,
> Carmine

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]


* Re: System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /`
  2015-10-15  4:39 ` Zygo Blaxell
@ 2015-10-15  7:59   ` Duncan
  0 siblings, 0 replies; 7+ messages in thread
From: Duncan @ 2015-10-15  7:59 UTC (permalink / raw)
  To: linux-btrfs

Zygo Blaxell posted on Thu, 15 Oct 2015 00:39:27 -0400 as excerpted:

> As others have pointed out, the raid0 allocator has a 2-disk-minimum
> constraint, so any difference in size between the largest and
> second-largest disk is unusable.  In your case that's 73% of the raw
> space.
> 
> If the two smaller disks were almost full (no space unallocated in
> 'btrfs fi usage') before you converted to raid0, then immediately after
> starting a conversion to raid0 you have no space left _at all_.  This is
> because the space you previously had under some other data profile is no
> longer considered "free" even if it isn't in use.  All future
> allocations must be raid0, starting immediately, but no space is
> available for raid0 data chunks.
> 
> This will cause some symptoms like huge write latency (it will not take
> seconds or minutes, but *hours* to write anything to the disk) and
> insanely high CPU usage.

Very nice analysis.  The implications hadn't occurred to me, but you 
spell them out in the stark terms the reality of the situation dictates, 
along with offering a sane way out.

Thanks.  I'll have to keep this in mind for the next time something like 
this comes up.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



end of thread, other threads:[~2015-10-15  7:59 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-13 21:21 System completely unresponsive after `btrfs balance start -dconvert=raid0 /` and `btrfs fi show /` Carmine Paolino
2015-10-14  5:08 ` Duncan
2015-10-14  9:13   ` Hugo Mills
2015-10-14 13:12     ` Austin S Hemmelgarn
2015-10-14 21:09     ` Duncan
2015-10-15  4:39 ` Zygo Blaxell
2015-10-15  7:59   ` Duncan
