* Replace RAID devices without resorting to degraded mode?
@ 2014-03-11 17:36 Scott D'Vileskis
  2014-03-11 23:26 ` Adam Goryachev
  2014-03-12  9:12 ` David Brown
  0 siblings, 2 replies; 10+ messages in thread
From: Scott D'Vileskis @ 2014-03-11 17:36 UTC (permalink / raw)
  To: linux-raid

Hello--
I have been using Linux RAID for about the last 12 years or so and
have endured dozens of RAID migrations, swapping of disks, growing &
shrinking arrays, transforming partitions, etc. I consider myself
pretty well versed in RAID0/1/5, and more recently RAID6.

I would like to grow my RAID5 array to fill larger devices (larger
partitions, actually). In the past, the typical method of replacing
all the disks/partitions with larger ones is to:
1) Add a larger drive/partition as a hot spare
2) Fail a disk
3) Wait for the rebuild/resync
4) Repeat for each disk in the array
5) After all drives/partitions are replaced and resynced, grow the device
and wait for a resync of the new space.
6) Resize the filesystem
While this typically works flawlessly, it does require the array to be
operated in degraded mode for the entire operation, which many would
consider risky.
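
For reference, that dance looks roughly like this -- a sketch with
hypothetical names (md0, sdb1 as an old member, sdg1 as the larger
partition, and resize2fs assuming an ext filesystem):
  mdadm /dev/md0 --add /dev/sdg1     # larger partition joins as a hot spare
  mdadm /dev/md0 --fail /dev/sdb1    # kick an old member; rebuild onto the spare starts
  # watch /proc/mdstat until the rebuild finishes, then repeat per disk
  mdadm --grow /dev/md0 --size=max   # claim the new space; wait for the resync
  resize2fs /dev/md0                 # finally, grow the filesystem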

Does Linux MD RAID support a method of hot replacing a disk WITHOUT
having to resort to degraded mode?

Scenario: I have 6 reliable, perfectly functioning Samsung 2TB drives,
all recently passed SMART tests, zero reallocated sectors, etc.
--One drive is a spare
--Five drives are each partitioned into 500G and 1500G. The five 1500G
partitions make up a RAID5. The 500G partitions were used in a
different RAID array; I am abandoning the 500G partitions and
reclaiming the space, but I want to transform the RAID5 to use all the
space on each drive. (And then probably convert to a RAID6)

IF-- the 1500G partitions were at the beginning of the drive, I could
simply (and I believe I have done this in the past):
1) Stop the RAID array
2) Delete both partitions, create a single partition with the same
offset (On a 4K sector boundary for those picky about the details)
3) Restart the array to check for errors/mistakes (It should come up clean)
4) Repeat steps 1-3 for additional drives
5) Grow the array (Resync starts at the *new* space)
6) Resize the filesystem
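
Sketched with made-up names (md5 built from sd[a-e]1, parted syntax;
the one hard requirement is that the new partition keeps the old
start sector):
  mdadm --stop /dev/md5
  parted /dev/sda unit s print                # note the start sector of the member partition
  parted /dev/sda rm 2                        # drop both old partitions...
  parted /dev/sda rm 1
  parted /dev/sda mkpart primary 2048s 100%   # ...recreate one, same start sector
  mdadm --assemble /dev/md5 /dev/sd[abcde]1   # restart and check it comes up clean
  mdadm --grow /dev/md5 --size=max            # once every drive has been done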

However, in my situation, my RAID5 partitions start in the middle of
the drive, complicating that slightly... Fortunately, I have a spare
drive or two to assist.
Would the following off-line scenario work?
1) Stop RAID array
2) Clone one of the RAID devices to a larger disk (Using dd)
3) Remove the old RAID device from the system
4) Restart the RAID array in readonly mode (to test that the clone was
successful without marking the array as dirty, otherwise, revert to
the removed disk)
5) Optional: Restart the RAID array in readwrite mode to confirm
6) Repeat 1-5 for each additional disk
7) Grow the array (Resync starts at the new space)
8) Grow the filesystem

Thanks!


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-11 17:36 Replace RAID devices without resorting to degraded mode? Scott D'Vileskis
@ 2014-03-11 23:26 ` Adam Goryachev
  2014-03-12  0:00   ` Scott D'Vileskis
  2014-03-12  9:12 ` David Brown
  1 sibling, 1 reply; 10+ messages in thread
From: Adam Goryachev @ 2014-03-11 23:26 UTC (permalink / raw)
  To: Scott D'Vileskis, linux-raid

On 12/03/14 04:36, Scott D'Vileskis wrote:
> Hello--
> I have been using Linux RAID for about the last 12 years or so and
> have endured dozens of RAID migrations, swapping of disks, growing &
> shrinking arrays, transforming partitions, etc. I consider myself
> pretty well versed in RAID0/1/5, and more recently RAID6.
>
> I would like to grow my RAID5 array to fill larger devices (larger
> partitions, actually). In the past, the typical method of replacing
> all the disks/partitions with larger ones is to:
> 1) Add a larger drive/partition as a hot spare
> 2) Fail a disk
> 3) Wait for the rebuild/resync
> 4) Repeat for each disk in the array
> 5) After all drives/partitions replaced and resynced, Grow the device
> and wait for a resync of the new space.
> 6) Resize the filesystem
> While this typically works flawlessly, it does require the array to be
> operated in degraded mode for the entire operation, which many would
> consider risky.
>
> Does Linux MD RAID support a method of hot replacing a disk WITHOUT
> having to resort to degraded mode?

Yes, it does, if you use a recent kernel + mdadm.

However, you have another option anyway. Just remove the hot spare, 
re-partition as needed, then grow the raid5 to raid6.
1) Wait for the re-sync to complete
2) Drop another old drive from the array
3) Re-partition
4) Add back to the array and re-sync

You will never have less redundancy than you have now during the above
process. Personally, I'd probably use the hot spare to move to RAID6,
and then use the migration feature to move a drive to its replacement
(assuming you have another spare drive available).
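
In mdadm terms, something like this (a sketch only -- md5/sdf1 are
placeholders, and older mdadm will want a --backup-file for the
level change):
  mdadm /dev/md5 --remove /dev/sdf1   # pull the hot spare
  # repartition sdf to the full size, then hand it back:
  mdadm /dev/md5 --add /dev/sdf1
  mdadm --grow /dev/md5 --level=6 --raid-devices=6 --backup-file=/root/md5.bak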

> Scenario: I have 6 reliable, perfectly functioning Samsung 2TB drives,
> all recently passed SMART tests, zero reallocated sectors, etc.
> --One drive is a spare
> --Five drives are each partitioned into 500G and 1500G. The five 1500G
> partitions make up a RAID5. The 500G partitions were used in a
> different RAID array; I am abandoning the 500G partitions and
> reclaiming the space, but I want to transform RAID5 to use all the
> space on each drive. (And then probably convert to a RAID6)
>
> IF-- the 1500G partitions were at the beginning of the drive, I could
> simply (and I believe I have done this in the past):
> 1) Stop the RAID array
> 2) Delete both partitions, create a single partition with the same
> offset (On a 4K sector boundary for those picky about the details)
> 3) Restart the array to check for errors/mistakes (It should come up clean)
> 4) Repeat steps 1-3 for additional drives
> 5) Grow the array (Resync starts at the *new space)
> 6) Resize the filesystem

Yep, I did this recently, and it works well, as long as you are using a
metadata version that puts the superblock at (or offset from) the
beginning of the device. If the superblock is at the end of the block
device, then when you change the size of the partition md won't find
the superblock, and so the array won't assemble.
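
A quick way to check which case applies (0.90 and 1.0 metadata sit at
the end of the device, 1.1 at the start, 1.2 at 4K from the start;
the device name is just an example):
  mdadm --examine /dev/sda2 | grep -E 'Version|Super Offset'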

> However, in my situation, my RAID5 partitions start in the middle of
> the drive, complicating that slightly... Fortunately, I have a spare
> drive or two to assist.
> Would the following off-line scenario work?
> 1) Stop RAID array
> 2) Clone one of the RAID devices to a larger disk (Using dd)
> 3) Remove the old RAID device from the system
> 4) Restart the RAID array in readonly mode (to test that the clone was
> successful without marking the array as dirty, otherwise, revert to
> the removed disk)
> 5) Optional: Restart the RAID array in readwrite mode to confirm
> 6) Repeat 1-5 for each additional disk
> 7) Grow the array (Resync starts at the new space)
> 8) Grow the filesystem

I'm not sure I would want to do that. It means very lengthy downtime of
the array (though it depends on how much downtime is allowed), and I'd
prefer to let MD manage the migration rather than me and dd (i.e.,
hopefully MD is more careful and knowledgeable than I am).

Hope this helps somewhat.

Actually, I was trying to find the URL showing the migrate options, but
couldn't seem to find any docs in the mdadm wiki at:
http://vger.kernel.org/vger-lists.html#linux-raid
I also checked the Debian RAID wiki, the Neil Brown blog, and various
other resources. Hopefully someone else will be able to provide the
relevant link. Perhaps searching the mailing list itself would be best
(I definitely recall seeing it discussed here), but I'm out of time now.
Good luck.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-11 23:26 ` Adam Goryachev
@ 2014-03-12  0:00   ` Scott D'Vileskis
  2014-03-12 11:54     ` Mikael Abrahamsson
  0 siblings, 1 reply; 10+ messages in thread
From: Scott D'Vileskis @ 2014-03-12  0:00 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: linux-raid

Thank you for the response..


>> Does Linux MD RAID support a method of hot replacing a disk WITHOUT
>> having to resort to degraded mode?
>
>
> Yes, it does, if you use a recent kernel + mdadm

I remember reading about this and researching it once before, but
I am pretty sure my Xubuntu 13.10 distro doesn't have the flavor of
mdadm I need.  (Awesome work on this tool, Neil! With mdadm you have
transformed the utility of the md subsystem, and made it almost
impossible to break an array with bad options.)
>
> However, you have another option anyway. Just remove the hot spare,
> re-partition as needed, then grow the raid5 to raid6.
> 1) Wait for the re-sync to complete
> 2) Drop another old drive from the array
> 3) Re-partition
> 4) Add back to the array and re-sync
> You will never have worse redundancy than current during the above process.
> Personally, I'd probably use the hot spare to move to RAID6, and then use
> the migration to move a drive to its replacement (assuming you have another
> spare drive available).

Thanks for pointing out that obvious solution, I had almost forgotten!
I think this had crossed my mind at some point, but I wasn't sure if I
needed RAID6 at this time. The 2TB Samsung F4EG drives have a
1-in-10^15 BER, which is on par with enterprise drives. I've been using
them for about 3 years, and they are still barely audible and perform
great. I have even purchased several used (the genuine Samsung article,
Made in Korea, not the post-merger Seagate flavor) and they have all
behaved great.
The other kink to this solution is that I only plan to have 4 drives
in the system when all is said and done. I might just go with a RAID10
in that case.

If I decide to go this route, migrating to RAID6 is certainly a great solution.
>
>
>> However, in my situation, my RAID5 partitions start in the middle of
>> the drive, complicating that slightly... Fortunately, I have a spare
>> drive or two to assist.
>> 1) Stop RAID array
>> 2) Clone one of the RAID devices to a larger disk (Using dd)
>> 3) Remove the old RAID device from the system
>> 4) Restart the RAID array in readonly mode (to test that the clone was
>> successful without marking the array as dirty, otherwise, revert to
>> the removed disk)
>> 5) Optional: Restart the RAID array in readwrite mode to confirm
>> 6) Repeat 1-5 for each additional disk
>> 7) Grow the array (Resync starts at the new space)
>> 8) Grow the filesystem
>
I did start this process and migrated the first drive. Array downtime
was acceptable to me. Details:
1) I stopped the RAID array
2) I created a partition on my spare drive
   (starting at sector 2048 so my 4K sector drive lies on a 4K boundary)
3) I cloned the partition with dd; it ran for a few hours at ~100 MB/s
sustained:
dd if=/dev/sda2 of=/dev/sdf1 bs=1M
(in another terminal: "while killall -USR1 dd; do sleep 60; done" was
pretty handy for monitoring progress)
4) I couldn't figure out how to start the array readonly, but I
assembled it manually with the following:
mdadm --assemble /dev/md5 /dev/sdf1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 --no-degraded
mdadm: /dev/md5 has been started with 5 drives.
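
(For the archives: newer mdadm also has an assemble-time read-only
switch, and the kernel can be told to start arrays read-only -- both
depend on your mdadm/kernel versions, so treat this as a sketch:)
  mdadm --assemble --readonly /dev/md5 /dev/sdf1 /dev/sd[bcde]2
  # or, before assembling:
  echo 1 > /sys/module/md_mod/parameters/start_ro   # arrays start read-only until first write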

So, while this solution does require a spare disk, this is an option
for migrating raid5 without running the array in degraded mode.
>
> Actually, I was trying to find the URL to show the migrate options, but
> couldn't seem to find any docs in the mdadm wiki at:
> http://vger.kernel.org/vger-lists.html#linux-raid
> Also, the debian raid wiki, the Neil Brown blog, and various other
> resources. Hopefully someone else will be able to provide the relevant link.
> Perhaps searching the mailing list itself would be best (I definitely recall
> seeing it discussed here), but I'm out of time now. Good luck.

I recall seeing it at one point too. Maybe it was in the btrfs man pages?
Thanks again!


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-11 17:36 Replace RAID devices without resorting to degraded mode? Scott D'Vileskis
  2014-03-11 23:26 ` Adam Goryachev
@ 2014-03-12  9:12 ` David Brown
  2014-03-12 12:23   ` Raul Dias
  2014-03-12 15:38   ` Scott D'Vileskis
  1 sibling, 2 replies; 10+ messages in thread
From: David Brown @ 2014-03-12  9:12 UTC (permalink / raw)
  To: Scott D'Vileskis, linux-raid

On 11/03/14 18:36, Scott D'Vileskis wrote:
> Hello--
> I have been using Linux RAID for about the last 12 years or so and
> have endured dozens of RAID migrations, swapping of disks, growing &
> shrinking arrays, transforming partitions, etc. I consider myself
> pretty well versed in RAID0/1/5, and more recently RAID6.
> 
> I would like to grow my RAID5 array to fill larger devices (larger
> partitions, actually). In the past, the typical method of replacing
> all the disks/partitions with larger ones is to:
> 1) Add a larger drive/partition as a hot spare
> 2) Fail a disk
> 3) Wait for the rebuild/resync
> 4) Repeat for each disk in the array
> 5) After all drives/partitions replaced and resynced, Grow the device
> and wait for a resync of the new space.
> 6) Resize the filesystem
> While this typically works flawlessly, it does require the array to be
> operated in degraded mode for the entire operation, which many would
> consider risky.
> 
> Does Linux MD RAID support a method of hot replacing a disk WITHOUT
> having to resort to degraded mode?

Step 1 in all this is, of course, to take a backup.  And step 2 is to
check that your backup is good.

It is also a good idea to practice on fake arrays made from loopback
"disks" - they work fine for md raid, and let you practice re-shaping,
re-sizing, etc., without any risk to your real disks.
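
A throwaway practice array takes a minute to set up -- a sketch (file
paths and sizes are arbitrary, and the loop names assume nothing else
is using them):
  for i in 0 1 2 3; do truncate -s 1G /tmp/d$i.img && losetup /dev/loop$i /tmp/d$i.img; done
  mdadm --create /dev/md9 --level=5 --raid-devices=4 /dev/loop[0-3]
  # ...practice --grow/--replace/reshapes here, then tear it down:
  mdadm --stop /dev/md9 && losetup -D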


If you want to safely replace the disks in a raid5 array, the easiest
way is to add a new disk (this can be an external USB disk if necessary)
and re-shape to an asymmetric raid6 with parity Q on the new disk.  Now
you have extra redundancy for safety.  (Use asymmetric raid6 to avoid
re-striping the existing disks.)
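
With a recent mdadm that reshape is roughly (a sketch; md0/sdg1 are
placeholders, and --layout=preserve is the man page's way of keeping
the existing RAID5 striping and putting Q on the new disk -- though
see Scott's report further down the thread):
  mdadm /dev/md0 --add /dev/sdg1
  mdadm --grow /dev/md0 --level=6 --raid-devices=6 --layout=preserve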

In your case, I think you want to re-use the original disks (but with
different partitioning).  So for each disk, you have the steps:

1. Fail the disk.
2. Re-partition the disk.  It's a good idea to zero the superblock too,
to avoid confusion.
3. Add the new disk partition into the array as a hot spare.
4. Wait for the rebuild/resync

And at the end, fail the extra disk with the Q parity, then reshape back
to raid5 (this will not involve any data movement since the disks are
already in raid5 shape).  At all times, you have at least 1 disk's worth
of redundancy.
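
As commands, with placeholder names (sdb is the disk being redone,
sdg1 carries the extra Q parity):
  mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
  # repartition sdb into one big partition, then:
  mdadm --zero-superblock /dev/sdb1                    # clear any stale metadata
  mdadm /dev/md0 --add /dev/sdb1
  # ...wait for the rebuild; repeat per disk; at the very end:
  mdadm /dev/md0 --fail /dev/sdg1 --remove /dev/sdg1
  mdadm --grow /dev/md0 --level=5 --raid-devices=5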


If you are using new disks (or at least one more new disk), and you have
a new kernel and mdadm with hot replace support, then the procedure is
similar.  First make your asymmetric raid6 with an additional disk for
extra safety.  Then for each disk in the main array, do this:

1. Attach a new disk, and partition it appropriately.  Zero the
superblock if it is a recycled disk.  Then add it as a hot spare.
2. Mark one of the original disks as replaceable.
3. Wait for the rebuild as data is copied from the replaceable disk to
the hot spare.
4. Fail and remove the replaced disk.
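
Per disk, that is roughly (placeholders again; --with just pins which
spare receives the copy):
  mdadm /dev/md0 --add /dev/sdh1                        # new partition joins as a spare
  mdadm /dev/md0 --replace /dev/sdb2 --with /dev/sdh1   # copy sdb2 -> sdh1, fully redundant throughout
  # md marks sdb2 faulty once the copy completes; then:
  mdadm /dev/md0 --remove /dev/sdb2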

Again, remove the extra Q parity disk at the end.  After the generation
of the Q disk and before its removal, you have at least 2 disks of
redundancy.  This gives you extra protection against user error, such as
pulling the wrong disk!





* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12  0:00   ` Scott D'Vileskis
@ 2014-03-12 11:54     ` Mikael Abrahamsson
  0 siblings, 0 replies; 10+ messages in thread
From: Mikael Abrahamsson @ 2014-03-12 11:54 UTC (permalink / raw)
  To: Scott D'Vileskis; +Cc: linux-raid


> I remembered reading about this once, and researching once before, but
> I am pretty sure my Xubuntu 13.10 distro doesn't have the flavor of
> mdadm I need.  (Awesome work on this tool Neil! With mdadm you have
> transformed the utility of the md subsystem, and made it almost
> impossible to break an array with bad options)

I did "--replace" on a 12.04 LTS with a 3.8 kernel plus side-compiled 
mdadm.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12  9:12 ` David Brown
@ 2014-03-12 12:23   ` Raul Dias
  2014-03-12 12:45     ` David Brown
  2014-03-12 15:38   ` Scott D'Vileskis
  1 sibling, 1 reply; 10+ messages in thread
From: Raul Dias @ 2014-03-12 12:23 UTC (permalink / raw)
  To: David Brown; +Cc: Scott D'Vileskis, linux-raid@vger.kernel.org List

2014-03-12 6:12 GMT-03:00 David Brown <david.brown@hesbynett.no>:
...
>
> If you want to safely replace the disks in a raid5 array, the easiest
> way is to add a new disk (this can be an external USB disk if necessary)
> and re-shape to an asymmetric raid6 with parity Q on the new disk.  Now
> you have an extra redundancy for safety.  (Use asymmetric raid6 to avoid
> re-striping the existing disks.)

Is there a solution for other RAID setups? 1,0,6,10?

thanks


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12 12:23   ` Raul Dias
@ 2014-03-12 12:45     ` David Brown
  2014-03-12 23:18       ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: David Brown @ 2014-03-12 12:45 UTC (permalink / raw)
  To: Raul Dias; +Cc: Scott D'Vileskis, linux-raid@vger.kernel.org List

On 12/03/14 13:23, Raul Dias wrote:
> 2014-03-12 6:12 GMT-03:00 David Brown <david.brown@hesbynett.no>:
> ...
>>
>> If you want to safely replace the disks in a raid5 array, the easiest
>> way is to add a new disk (this can be an external USB disk if necessary)
>> and re-shape to an asymmetric raid6 with parity Q on the new disk.  Now
>> you have an extra redundancy for safety.  (Use asymmetric raid6 to avoid
>> re-striping the existing disks.)
> 
> Is there a solution for other RAID setups? 1,0,6,10?
> 
> thanks
> 

The whole idea of bumping up from raid5 to raid6 before maintenance is
to improve the redundancy before you remove existing drives.  It is not
strictly necessary, but I think it is nice to have the extra safety.
Hot replace makes it less important (since you don't remove the "old"
drive until the new one is fully sync'ed).

So for raid1, you would add a new mirror device, making a 3-way mirror
instead of a 2-way one (I don't know off-hand if md raid allows this
re-shape).

Raid0 has no redundancy at all - so re-shape it to raid4 or asymmetric
raid5 (which is equivalent) with parity on a new disk before doing the
replacements.
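
For a two-disk raid0 that would be roughly (a sketch; the first step
leaves a momentarily "degraded" raid5, which is still no worse than
the raid0 you started with):
  mdadm --grow /dev/md0 --level=5          # raid0 -> raid5 with a missing parity disk
  mdadm /dev/md0 --add /dev/sdc1           # new disk fills the parity slot; rebuild follows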

Raid6 already has two parities.  You'll have to wait for Andrea
Mazzoleni's great work on multi-parity raid to make it into mdadm and
the kernel before you can add extra redundancy, but two disk redundancy
is probably enough anyway (especially with hot replace).

I think there are severe limits on the types of Raid10 reshapes you can
do, so I don't think it is possible to add redundancy here.




* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12  9:12 ` David Brown
  2014-03-12 12:23   ` Raul Dias
@ 2014-03-12 15:38   ` Scott D'Vileskis
  1 sibling, 0 replies; 10+ messages in thread
From: Scott D'Vileskis @ 2014-03-12 15:38 UTC (permalink / raw)
  To: David Brown; +Cc: linux-raid

>
> Step 1 in all this is, of course, to take a backup.  And step 2 is to
> check that your backup is good.

Everything important was backed up recently.  :-)

>
> It is also a good idea to practice on fake arrays made from loopback
> "disks" - they work fine for md raid, and let you practice re-shaping,
> re-sizing, etc., without any risk to your real disks.

I also recommend this for things I am not sure about,
and I imagine it is critical for development testing.
In fact, when I give a demo to folks:
--I'll first create a RAID array with loop devices.
--I'll then copy over a movie and start watching it.
--I'll then fail a disk and delete its backing file.
--Then I'll create a new backing file, add it to the array,
and watch it recover, all the while the movie doesn't skip a beat. :-)

>
>
> If you want to safely replace the disks in a raid5 array, the easiest
> way is to add a new disk (this can be an external USB disk if necessary)
> and re-shape to an asymmetric raid6 with parity Q on the new disk.  Now
> you have an extra redundancy for safety.  (Use asymmetric raid6 to avoid
> re-striping the existing disks.)
>
I would disagree with your assertion that it is the 'easiest' way.
I tried to do this on my system, but you didn't specify which parity
method to use.
RTFMing I found...
              These same layouts are available for RAID6.  There are also 4
              layouts that will provide an intermediate stage for converting
              between RAID5 and RAID6.  These provide a layout which is
              identical to the corresponding RAID5 layout on the first N-1
              devices, and has the 'Q' syndrome (the second 'parity' block
              used by RAID6) on the last device.  These layouts are:
              left-symmetric-6, right-symmetric-6, left-asymmetric-6,
              right-asymmetric-6, and parity-first-6.
Regardless, attempts to use any of these layouts were countered with
"that parity mode isn't available for that RAID level", or something similar.

It was much 'easier' to compile the latest mdadm and use --replace
(Thanks, Mikael):
  apt-get install git
  git clone git://neil.brown.name/mdadm
  cd mdadm
  make
  make install
  mdadm --add /dev/md5 /dev/sdX
  mdadm --replace /dev/md5 /dev/sdY

Recovery has been the same, >100 MB/s, and it is reading from one disk
('Y') and writing to the other ('X') without a full resync.

Thanks again, everyone, for your awesome ideas!

Now, time to see how many of these Linux raid options exist in my new
Lenovo/Iomega ix4 NAS unit.
http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=lenovo%20ix4
Apparently it runs a flavor of Debian for ARM.  I suspect I'll be
cross-compiling some modern versions of our favorite tool :-)


* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12 12:45     ` David Brown
@ 2014-03-12 23:18       ` NeilBrown
  2014-03-13  9:27         ` David Brown
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-03-12 23:18 UTC (permalink / raw)
  To: David Brown
  Cc: Raul Dias, Scott D'Vileskis, linux-raid@vger.kernel.org List


On Wed, 12 Mar 2014 13:45:31 +0100 David Brown <david.brown@hesbynett.no>
wrote:

> So for raid1, you would add a new mirror device, making a 3-way mirror
> instead of a 2-way one (I don't know off-hand if md raid allows this
> re-shape).

It sure does. raid1 can be 1-way, 2-way, 3-way, 4-way ..... and "mdadm
--grow" can easily switch between them.

NeilBrown




* Re: Replace RAID devices without resorting to degraded mode?
  2014-03-12 23:18       ` NeilBrown
@ 2014-03-13  9:27         ` David Brown
  0 siblings, 0 replies; 10+ messages in thread
From: David Brown @ 2014-03-13  9:27 UTC (permalink / raw)
  To: NeilBrown
  Cc: Raul Dias, Scott D'Vileskis, linux-raid@vger.kernel.org List

On 13/03/14 00:18, NeilBrown wrote:
> On Wed, 12 Mar 2014 13:45:31 +0100 David Brown <david.brown@hesbynett.no>
> wrote:
> 
>> So for raid1, you would add a new mirror device, making a 3-way mirror
>> instead of a 2-way one (I don't know off-hand if md raid allows this
>> re-shape).
> 
> It sure does. raid1 can be 1-way, 2-way, 3-way, 4-way ..... and "mdadm
> --grow" can easily switch between them.
> 

I thought that was the case, but I didn't want to promise anything
without checking.

Thanks,

David





