* Raid 5 to Raid 1 (half of the data not required)
@ 2011-08-23 23:41 Mike Viau
  2011-08-24  0:46 ` NeilBrown
  2011-08-24  8:03 ` Gordon Henderson
  0 siblings, 2 replies; 8+ messages in thread
From: Mike Viau @ 2011-08-23 23:41 UTC (permalink / raw)
  To: linux-raid


Hello,

I am trying to convert my currently running RAID 5 array into a RAID 1. All the guides I can find online are for the reverse direction, converting/migrating a RAID 1 to a RAID 5. I have intentionally allocated only exactly half of the total RAID 5 size. I would like to create the RAID 1 over /dev/sdb1 and /dev/sdc1 with the data currently on the RAID 5 running on the same drives plus /dev/sde1. Is this possible? I wish to have the data stored redundantly on two hard drives without the parity that is present in RAID 5.

Thanks for any help in advance :)


# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
     Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Aug 23 11:34:00 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : HOST:0  (local to host HOST)
           UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
         Events : 55750

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       65        2      active sync   /dev/sde1


-M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Raid 5 to Raid 1 (half of the data not required)
  2011-08-23 23:41 Raid 5 to Raid 1 (half of the data not required) Mike Viau
@ 2011-08-24  0:46 ` NeilBrown
       [not found]   ` <BAY148-W29A568212EC52E4C4BC52EF110@phx.gbl>
  2011-08-24  8:03 ` Gordon Henderson
  1 sibling, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-08-24  0:46 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid

On Tue, 23 Aug 2011 19:41:11 -0400 Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> Hello,
> 
> I am trying to convert my currently running raid 5 array into a raid 1. All the guides I can see online are for the reverse direction in which one is converting/migrating a raid 1 to raid 5. I have intentionally only allocated exactly half of the total raid 5 size is. I would like to create the raid 1 over /dev/sdb1 and /dev/sdc1 with the data on the raid 5 running with the same drives plus /dev/sde1. Is this possible, I wish to have the data redundantly over two hard drive without the parity which is present in raid 5?

Yes, this is possible, though you will need a fairly new kernel (late 2.6.30s at
least) and a recent mdadm.

And you need to be running ext3 because I think it is the only one you can
shrink.

1/ umount filesystem
2/ resize2fs /dev/md0 490G
     This makes the filesystem definitely use less than half the space.  It is
     safest to leave a bit of slack for relocated metadata or something.
     If you don't make this small enough, some later step will fail, and
     you can then revert back to here and try again.

3/ mdadm --grow --array-size=490G /dev/md0
    This makes the array appear smaller without actually destroying any data.
4/ fsck -f /dev/md0
    This makes sure the filesystem inside the shrunk array is still OK.
    If there is a problem you can "mdadm --grow" to a bigger size and check
    again. 

Only if the above all looks ok, continue.  You can remount the filesystem at
this stage if you want to.

5/ mdadm --grow /dev/md0 --raid-disks=2

    If you didn't make the array-size small enough, this will fail.
    If you did, it will start a 'reshape' which shuffles all the data around
    so it fits (with parity) on just two devices.

6/ mdadm --wait /dev/md0
7/ mdadm --grow /dev/md0 --level=1
    This instantly converts a 2-device RAID5 to a 2-device RAID1.
8/ mdadm --grow /dev/md0 --array-size=max
9/ resize2fs /dev/md0
     This will grow the filesystem up to fill the available space.

All done.
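For reference, the nine steps above can be strung together as a dry-run script. It only prints each command (the 490G figure and /dev/md0 are from the steps above; run the printed commands by hand, in order, after a verified backup):

```shell
#!/bin/sh
# Dry-run sketch of steps 1-9: it only PRINTS the plan, nothing is executed.
PLAN=""
run() { PLAN="${PLAN}$*
"; printf '+ %s\n' "$*"; }

run umount /dev/md0                          # 1: fs must be offline to shrink
run resize2fs /dev/md0 490G                  # 2: shrink fs well under half the array
run mdadm --grow --array-size=490G /dev/md0  # 3: shrink the array; data preserved
run fsck -f /dev/md0                         # 4: verify the shrunk fs is intact
run mdadm --grow /dev/md0 --raid-disks=2     # 5: reshape onto two devices
run mdadm --wait /dev/md0                    # 6: block until the reshape completes
run mdadm --grow /dev/md0 --level=1          # 7: 2-device RAID5 -> RAID1, instant
run mdadm --grow /dev/md0 --array-size=max   # 8: undo the temporary size cap
run resize2fs /dev/md0                       # 9: grow fs to fill the array
```

Each step's result should be checked before moving to the next; up to step 4 the change is still revertible by growing the array-size back.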

Please report success or failure or any interesting observations.

NeilBrown


> 
> Thanks for any help in advance :)
> 
> 
> # mdadm -D /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Mon Dec 20 09:48:07 2010
>      Raid Level : raid5
>      Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
>   Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
>    Raid Devices : 3
>   Total Devices : 3
>     Persistence : Superblock is persistent
> 
>     Update Time : Tue Aug 23 11:34:00 2011
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>            Name : HOST:0  (local to host HOST)
>            UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
>          Events : 55750
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       17        0      active sync   /dev/sdb1
>        1       8       33        1      active sync   /dev/sdc1
>        3       8       65        2      active sync   /dev/sde1
> 
> 
> -M



* Re: Raid 5 to Raid 1 (half of the data not required)
       [not found]   ` <BAY148-W29A568212EC52E4C4BC52EF110@phx.gbl>
@ 2011-08-24  2:39     ` NeilBrown
  2011-08-24  7:34       ` Robin Hill
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-08-24  2:39 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid

On Tue, 23 Aug 2011 22:18:12 -0400 Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Wed, 24 Aug 2011 <neilb@suse.de> wrote:
> > > On Tue, 23 Aug 2011 19:41:11 -0400 Mike Viau <viaum@sheridanc.on.ca> wrote:
> > 
> > > 
> > > Hello,
> > > 
> > > I am trying to convert my currently running raid 5 array into a raid 1. All the guides I can see online are for the reverse direction in which one is converting/migrating a raid 1 to raid 5. I have intentionally only allocated exactly half of the total raid 5 size is. I would like to create the raid 1 over /dev/sdb1 and /dev/sdc1 with the data on the raid 5 running with the same drives plus /dev/sde1. Is this possible, I wish to have the data redundantly over two hard drive without the parity which is present in raid 5?
> > 
> > Yes this is possible, though you will need a fairly new kernel (late 30's at
> > least) and mdadm.
> > 
> 
> In your opinion, is Debian's 2.6.32-35 going to cut it? Not very late 30's, with mdadm v3.1.4 (31st August 2010).

Should be OK.  The core functionality went in in 2.6.29.  There have been a
few bug fixes since then, but they are for corner cases that you probably
won't hit.

> 
> > And you need to be running ext3 because I think it is the only one you can
> > shrink.
> > 
> > 1/ umount filesystem
> > 2/ resize2fs /dev/md0 490G
> >      This makes the array use definitely less than half the space.  It is 
> >      safest to leave a bit of slack for relocated metadata or something.
> >      If you don't make this small enough some later step will fail, and 
> >      you can then revert back to here and try again.
> > 
> 
> 
> The file system used was ext4 which is mounted off of a LVM logical volume inside of a virtual machine :P

Nice of you to keep it simple...

ext4 isn't a problem.  LVM shouldn't be, but it adds an extra step.  You
first shrink the fs, then the lv, then the pv, then the RAID...
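With LVM in the stack, the extra step looks roughly like this print-only sketch. The VG/LV names, mount point, and sizes are placeholders; each outer layer is deliberately left a little larger than the one inside it:

```shell
#!/bin/sh
# Dry-run sketch of the shrink order with LVM in the stack:
# filesystem first, then LV, then PV, then the md array. Print-only.
PLAN=""
run() { PLAN="${PLAN}$*
"; printf '+ %s\n' "$*"; }

run umount /mnt/data                                   # placeholder mount point
run e2fsck -f /dev/someVG/someLV                       # ext4 must be clean first
run resize2fs /dev/someVG/someLV 470G                  # 1: shrink the filesystem
run lvreduce -L 480G /dev/someVG/someLV                # 2: then the logical volume
run pvresize --setphysicalvolumesize 490G /dev/md0p1   # 3: then the physical volume
run mdadm --grow --array-size=500G /dev/md0            # 4: finally the array
```

Getting the order backwards at any layer truncates live data, so the slack between layers is the safety margin.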

> 
> I am still able to run the first two steps, but am concerned about data loss on the underlying ext4 filesystem if I shrink the filesystem too much; 490G may not be possible. Other than that, the following steps sound 'do-able' if the re-size works.
> 
> > 3/ mdadm --grow --array-size=490G /dev/md0
> >     This makes the array appear smaller without actually destroying any data.
> > 4/ fsck -f /dev/md0
> >     This makes sure the filesystem inside the shrunk array is still OK.
> >     If there is a problem you can "mdadm --grow" to a bigger size and check
> >     again. 
> > 
> > Only if the above all looks ok, continue.  You can remount the filesystem at
> > this stage if you want to.
> > 
> > 5/ mdadm --grow /dev/md0 --raid-disks=2
> > 
> >     If you didn't make the array-size small enough, this will fail.
> >     If you did it will start a 'reshape' which shuffles all the data around
> >     so it fits (With parity) on just two devices.
> > 
> > 6/ mdadm --wait /dev/md0
> > 7/ mdadm --grow /dev/md0 --level=1
> >     This instantly converts a 2-device RAID5 to a 2-device RAID1.
> > 8/ mdadm --grow /dev/md0 --array-size=max
> > 9/ resize2fs /dev/md0
> >      This will grow the filesystem up to fill the available space.
> > 
> > All done.
> > 
> > Please report success or failure or any interesting observations.
> > 
> 
> I am not sure how crack-pot of a solution this would be, but could I: 
> 
> 1/ mdadm -r /dev/md0 /dev/sde1
> Remove /dev/sde1 from the raid 5 array

Here you have lost your redundancy .... your choice I guess.

> 
> 2/ dd if=/dev/zero of=/dev/sde1 bs=512 count=1
> This clears the msdos mbr and clears the partitions
> 
> 3/ parted, fdisk or cfdisk to create a new 1TB (or less is possible as well) ext4 partition on /dev/sde
> 
> 4/ mkfs.ext4 /dev/sde1
> 
> 5/ cp -R {mounted location of degraded /dev/md0 partition} {mounted location of /dev/sde1 partition}
> Aka backup
> 
> 6/ mdadm --zero-superblock on /dev/sdb1 and /dev/sdc1
> Prep the two drives for the new raid array

Probably want to stop the array (mdadm -S /dev/md0) before you do that.

> 
> 7/ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
> Create new raid 1 array on drives
> 
> 8/ create LVM (pv,vg, and lv)
> 
> 9/ parted, fdisk or cfdisk to create a new 1TB ext4 partition on LVM
> 
> 10/ mkfs.ext4 on LV on /dev/md0
> 
> 11/ cp -R {mounted location of /dev/sde1 partition} {mounted location of new /dev/md0 partition} 
> 
> Any thought/suggestion/correction to this proposed idea?

Doing two copies seems a bit wasteful.

- fail/remove sdb1
- create a 1-device RAID1 on sdb1 (or a 2 device RAID1 with a missing device).
- do the lvm, mkfs
- copy from old filesystem to the new filesystem
- stop the old array.
- add sdc1 to the new RAID1.
- If you made it a 1-device RAID1, --grow it to 2 devices.

Only one copy operation needed.
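Sketched as commands in the same print-only style (the new array name /dev/md1, the VG/LV names, and the mount points are all placeholders; this shows the 1-device-RAID1 variant, which mdadm only creates with --force):

```shell
#!/bin/sh
# Dry-run sketch of the single-copy route; print-only.
PLAN=""
run() { PLAN="${PLAN}$*
"; printf '+ %s\n' "$*"; }

run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # degrade the RAID5
run mdadm --create /dev/md1 --level=1 --raid-devices=1 --force /dev/sdb1
run pvcreate /dev/md1                                    # new LVM stack on md1
run vgcreate newVG /dev/md1
run lvcreate -l 100%FREE -n dataLV newVG
run mkfs.ext4 /dev/newVG/dataLV
run mount /dev/newVG/dataLV /mnt/new
run cp -a /mnt/old/. /mnt/new/                           # one copy, attrs kept
run umount /mnt/old
run mdadm --stop /dev/md0                                # old array gone
run mdadm --zero-superblock /dev/sdc1
run mdadm /dev/md1 --add /dev/sdc1
run mdadm --grow /dev/md1 --raid-devices=2               # 1-device -> 2-device
```

Until the final resync completes there is no redundancy at all, which is the trade-off Neil points out above.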

NeilBrown



> 
> 
> Thanks again :)
> 


* Re: Raid 5 to Raid 1 (half of the data not required)
  2011-08-24  2:39     ` NeilBrown
@ 2011-08-24  7:34       ` Robin Hill
  0 siblings, 0 replies; 8+ messages in thread
From: Robin Hill @ 2011-08-24  7:34 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid


On Wed Aug 24, 2011 at 12:39:38 +1000, NeilBrown wrote:

> On Tue, 23 Aug 2011 22:18:12 -0400 Mike Viau <viaum@sheridanc.on.ca> wrote:
> 
> > 
> > I am not sure how crack-pot of a solution this would be, but could I: 
> > 
> > 1/ mdadm -r /dev/md0 /dev/sde1
> > Remove /dev/sde1 from the raid 5 array
> 
> Here you have lost your redundancy .... your choice I guess.
> 
> > 
> > 2/ dd if=/dev/zero of=/dev/sde1 bs=512 count=1
> > This clears the msdos mbr and clears the partitions
> > 
> > 3/ parted, fdisk or cfdisk to create a new 1TB (or less is possible as well) ext4 partition on /dev/sde
> > 
> > 4/ mkfs.ext4 /dev/sde1
> > 
> > 5/ cp -R {mounted location of degraded /dev/md0 partition} {mounted location of /dev/sde1 partition}
> > Aka backup
> > 

If you're wanting to backup, "cp -a" would be better than "cp -R",
otherwise you lose attributes & symlinks.

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* Re: Raid 5 to Raid 1 (half of the data not required)
  2011-08-23 23:41 Raid 5 to Raid 1 (half of the data not required) Mike Viau
  2011-08-24  0:46 ` NeilBrown
@ 2011-08-24  8:03 ` Gordon Henderson
  2011-08-24  8:21   ` Mikael Abrahamsson
  1 sibling, 1 reply; 8+ messages in thread
From: Gordon Henderson @ 2011-08-24  8:03 UTC (permalink / raw)
  To: linux-raid

On Tue, 23 Aug 2011, Mike Viau wrote:

>
> Hello,
>
> I am trying to convert my currently running raid 5 array into a raid 1. 
> All the guides I can see online are for the reverse direction in which 
> one is converting/migrating a raid 1 to raid 5. I have intentionally 
> only allocated exactly half of the total raid 5 size is. I would like to 
> create the raid 1 over /dev/sdb1 and /dev/sdc1 with the data on the raid 
> 5 running with the same drives plus /dev/sde1. Is this possible, I wish 
> to have the data redundantly over two hard drive without the parity 
> which is present in raid 5?

3-drive RAID5 -> 2-drive RAID1...

Neil's solution is interesting (and he should know :), but personally I'd 
probably take a much simpler approach without FS resizing, etc., as this 
allows you to change filesystems or use something other than ext3...

So start with taking a backup :)

Then verify the existing RAID5 array (echo "check" > .. etc.) and wait...

Then "break" the array by failing /dev/sdb1

Create a single-drive RAID1 using /dev/sdb1 and "missing"

mkfs the filesystem of choice on this new MD drive and mount it.

use cp -a (or rsync) to copy data from the raid5 array to the new raid1 
array.

stop the raid5

hot-add /dev/sdc1 into the new raid1

then fiddle with whatever boot options, etc. to make sure the new drive is 
assembled at boot time, mounted, etc.
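In the same print-only style, a sketch of that break-and-copy route (/mnt/old and /mnt/new are placeholder mount points; the sync_action/mismatch_cnt files are the standard md sysfs scrub interface):

```shell
#!/bin/sh
# Dry-run sketch of the verify-then-break route; print-only.
PLAN=""
run() { PLAN="${PLAN}$*
"; printf '+ %s\n' "$*"; }

run 'echo check > /sys/block/md0/md/sync_action'         # scrub the RAID5 first...
run 'cat /sys/block/md0/md/mismatch_cnt'                 # ...and expect 0 when done
run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # "break" the array
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
run mkfs.ext4 /dev/md1                                   # any fs of choice here
run mount /dev/md1 /mnt/new
run cp -a /mnt/old/. /mnt/new/                           # rsync -a works too
run umount /mnt/old
run mdadm --stop /dev/md0
run mdadm /dev/md1 --add /dev/sdc1                       # hot-add; resync restores redundancy
```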

This isn't as "glamorous" as Neil's method involving lots of mdadm 
commands, shrinks and grows, but sometimes it's good to keep things at 
a simpler level?

Well, it works for me, anyway!

Gordon


* Re: Raid 5 to Raid 1 (half of the data not required)
  2011-08-24  8:03 ` Gordon Henderson
@ 2011-08-24  8:21   ` Mikael Abrahamsson
  2011-08-24  8:42     ` NeilBrown
  0 siblings, 1 reply; 8+ messages in thread
From: Mikael Abrahamsson @ 2011-08-24  8:21 UTC (permalink / raw)
  To: Gordon Henderson; +Cc: linux-raid

On Wed, 24 Aug 2011, Gordon Henderson wrote:

> This isn't as "glamorous" as Neils method involving lots of mdadm 
> commands, shrinks and grows, but sometimes it's good to keep things at a 
> simpler level?

Another way would be to add the new raid1 with missing drive to the lv, 
and pvmove all extents off of the existing raid5 md pv, then vgreduce away 
from it, stop the raid5, zero-superblock, and add one drive to add 
redundancy for the raid1.

But that has little to do with Linux RAID, and everything to do with LVM.  It 
also means you can do everything online, since pvmove doesn't require taking 
anything offline.
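A print-only sketch of that route (the VG name masterVG and the PV /dev/md0p1 appear elsewhere in the thread; /dev/md1 and the device names are assumptions):

```shell
#!/bin/sh
# Dry-run sketch of the pvmove route; print-only. Everything between the
# initial degrade and the final resync happens with the LVs still mounted.
PLAN=""
run() { PLAN="${PLAN}$*
"; printf '+ %s\n' "$*"; }

run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # degrade the RAID5
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
run pvcreate /dev/md1
run vgextend masterVG /dev/md1         # new PV joins the existing VG
run pvmove /dev/md0p1 /dev/md1         # migrate extents; filesystems stay online
run vgreduce masterVG /dev/md0p1       # drop the old PV from the VG
run mdadm --stop /dev/md0
run mdadm --zero-superblock /dev/sdc1 /dev/sde1
run mdadm /dev/md1 --add /dev/sdc1     # resync restores redundancy
```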

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Raid 5 to Raid 1 (half of the data not required)
  2011-08-24  8:21   ` Mikael Abrahamsson
@ 2011-08-24  8:42     ` NeilBrown
  2011-08-26  0:11       ` Mike Viau
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-08-24  8:42 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: Gordon Henderson, linux-raid

On Wed, 24 Aug 2011 10:21:32 +0200 (CEST) Mikael Abrahamsson
<swmike@swm.pp.se> wrote:

> On Wed, 24 Aug 2011, Gordon Henderson wrote:
> 
> > This isn't as "glamorous" as Neils method involving lots of mdadm 
> > commands, shrinks and grows, but sometimes it's good to keep things at a 
> > simpler level?
> 
> Another way would be to add the new raid1 with missing drive to the lv, 
> and pvmove all extents off of the existing raid5 md pv, then vgreduce away 
> from it, stop the raid5, zero-superblock, and add one drive to add 
> redundancy for the raid1.
> 
> But that has little to do with linux raid, and all to do with LVM. It also 
> means you can do everything online since pvmove doesn't require to offline 
> anything.
> 

There are certainly lots of approaches. :-)
But every approach will require either copying or shrinking the filesystem,
and as extX doesn't support online shrinking, the filesystem will have to be
effectively off-line while that shrink happens.
(If you shrink by copying, then it could technically be on-line, but it had
better not be written to.)

NeilBrown


* RE: Raid 5 to Raid 1 (half of the data not required)
  2011-08-24  8:42     ` NeilBrown
@ 2011-08-26  0:11       ` Mike Viau
  0 siblings, 0 replies; 8+ messages in thread
From: Mike Viau @ 2011-08-26  0:11 UTC (permalink / raw)
  To: neilb, swmike, linux-raid, gordon


> On Wed, 24 Aug 2011 18:42:35 +1000 <neilb@suse.de> wrote:
> > On Wed, 24 Aug 2011 10:21:32 +0200 (CEST) Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> > On Wed, 24 Aug 2011, Gordon Henderson wrote:
> > 
> > > This isn't as "glamorous" as Neils method involving lots of mdadm 
> > > commands, shrinks and grows, but sometimes it's good to keep things at a 
> > > simpler level?
> > 
> > Another way would be to add the new raid1 with missing drive to the lv, 
> > and pvmove all extents off of the existing raid5 md pv, then vgreduce away 
> > from it, stop the raid5, zero-superblock, and add one drive to add 
> > redundancy for the raid1.
> > 
> > But that has little to do with linux raid, and all to do with LVM. It also 
> > means you can do everything online since pvmove doesn't require to offline 
> > anything.
> > 
> 
> There are certainly lots of approaches. :-)
> But every approach will require either coping or shrinking the filesystem and
> as extX doesn't support online shrinking the filesystem will have to be
> effectively off-line while that shrink happens.
> (if you shrink by coping, then it could be technically on-line but it had
> better not be written to).
> 

Wow! Thank you so much everyone for your feedback, I am truly very grateful :) 

Before tackling this task I plan to delete some unnecessary files so there is less to back up, then make the all-important backup, and lastly attempt the migration. I had to remind myself how I originally built up the LVM on the RAID 5 array :)


Model: Linux Software RAID Array (md)
Disk /dev/md0: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2000GB  2000GB  primary               lvm

OR

 --- Physical volume ---
  PV Name               /dev/md0p1
  VG Name               masterVG
  PV Size               1.82 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               261031
  Allocated PE          215901
  PV UUID               xiS8is-RR6D-Swre-IHQN-yGY2-cNmJ-wGGBY7

AND

  --- Volume group ---
  VG Name               masterVG
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       215901 / 843.36 GiB
  Free  PE / Size       261031 / 1019.65 GiB
  VG UUID               eoZgIp-50Wb-Lrhg-Sawt-rWDV-YIDy-Ez2Glr



So it looks like the entire RAID 5 array is one LVM physical volume and then one volume group.


  --- Logical volume ---
  LV Name                /dev/masterVG/backupLV
  VG Name                masterVG
  LV UUID                wc61ER-uoNn-ynXI-2v64-wpa8-ON3g-im4fo8
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                700.00 GiB
  Current LE             179200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           254:9


The logical volume has a size of 700.00 GiB, which is less than the 1 TB size I plan for the newly migrated RAID 1 mdadm array (with two 1 TB drives). I therefore don't think I will need to shrink the ext4 filesystem, which hopefully means I can complete the entire process over time while keeping the data available online.

I remember that I had good reasons for using LVM, but I will have to get reacquainted again with the commands of LVM like pv/vg/lv[move/reduce]...


Thanks again to everyone for their help :D





