* Converting RAID1 to RAID5
@ 2011-09-11  3:38 Alex
  2011-09-11  3:57 ` NeilBrown
  0 siblings, 1 reply; 8+ messages in thread
From: Alex @ 2011-09-11  3:38 UTC (permalink / raw)
  To: linux-raid

Hi,
I have a few two-disk RAID1 partitions that I'd like to convert to
three-disk RAID5 partitions, using Fedora 15 with ext4. I've read a few
docs online, but none that are authoritative or current. Some even say
to zero the superblock first, which doesn't sound safe at all.

The partitions were created using v1.0 with mdadm-3.2.2:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Jan  1 13:07:37 2011
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Sep 10 23:11:23 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:0
           UUID : a7af0eec:2bf1bb46:a6afa7a4:6e61d731
         Events : 181

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

I've read the man page, and it seems to indicate that I should use the
--grow option, but it's still a little unclear. I've read that the arrays
should be stopped first, but the man page seems to indicate the conversion
should be performed on a running array.

Is the general process to first convert the RAID1 to a two-disk RAID5,
then --add the third disk?

Thanks,
Alex


* Re: Converting RAID1 to RAID5
  2011-09-11  3:38 Converting RAID1 to RAID5 Alex
@ 2011-09-11  3:57 ` NeilBrown
  2011-09-11 15:40   ` Alex
                     ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: NeilBrown @ 2011-09-11  3:57 UTC (permalink / raw)
  To: Alex; +Cc: linux-raid

On Sat, 10 Sep 2011 23:38:09 -0400 Alex <mysqlstudent@gmail.com> wrote:

> Hi,
> I have a few two-disk RAID1 partitions that I'd like to convert to
> three-disk RAID5 partitions, using Fedora 15 with ext4. I've read a few
> docs online, but none that are authoritative or current. Some even say
> to zero the superblock first, which doesn't sound safe at all.
> 
> The partitions were created using v1.0 with mdadm-3.2.2:
> 
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.0
>   Creation Time : Sat Jan  1 13:07:37 2011
>      Raid Level : raid1
>      Array Size : 511988 (500.07 MiB 524.28 MB)
>   Used Dev Size : 511988 (500.07 MiB 524.28 MB)
>    Raid Devices : 2
>   Total Devices : 2
>     Persistence : Superblock is persistent
> 
>     Update Time : Sat Sep 10 23:11:23 2011
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
> 
>            Name : localhost.localdomain:0
>            UUID : a7af0eec:2bf1bb46:a6afa7a4:6e61d731
>          Events : 181
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
> 
> I've read the man page, and it seems to indicate that I should use the
> --grow option, but it's still a little unclear. I've read that the arrays
> should be stopped first, but the man page seems to indicate the conversion
> should be performed on a running array.
> 
> Is the general process to first convert the RAID1 to a two-disk RAID5,
> then --add the third disk?

I strongly suggest that you create a couple of loop-back devices and
experiment.
ie.

 for i in 0 1 2
 do
    dd if=/dev/zero of=/tmp/file$i bs=1M count=100
    losetup /dev/loop$i /tmp/file$i
 done

 mdadm -C /dev/md0 -l1 -n2 -e 1.0 /dev/loop0 /dev/loop1
 mkfs /dev/md0
 mount /dev/md0 /mnt
 cp -r /lib /mnt

 then try some things. e.g.

 mdadm /dev/md0 --add /dev/loop2
 mdadm --grow /dev/md0 --level=5 --raid-devices=3

 Try failing a device during the reshape.  Check if the data is still OK.
 Try it as two separate steps and see if it makes a difference.
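
 For instance, one such experiment might look roughly like this (which
 device you fail is arbitrary; the diff assumes you copied /lib in as above):

 mdadm /dev/md0 --fail /dev/loop1     # fail a member while the reshape runs
 cat /proc/mdstat                     # watch what happens to the reshape
 diff -r /lib /mnt/lib                # check the copied data is still intact
 mdadm /dev/md0 --remove /dev/loop1
 mdadm /dev/md0 --add /dev/loop1      # put it back and let md recover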


Experimenting will give you a lot more confidence than any amount of
authoritative statements about what it should do.

There was once a tool called raidreconf which would reshape an array while it
was offline.  That isn't supported anymore.
mdadm and the kernel md driver do the reshaping while the array is online.  You
don't need to stop it first.
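
If you want to watch an online reshape while it runs, a couple of read-only
checks are enough, e.g.:

 watch cat /proc/mdstat      # overall progress
 mdadm --detail /dev/md0     # shows a "Reshape Status : N% complete" line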

NeilBrown


* Re: Converting RAID1 to RAID5
  2011-09-11  3:57 ` NeilBrown
@ 2011-09-11 15:40   ` Alex
  2011-09-15 23:50   ` Alex
       [not found]   ` <27910711.10376.1316131253095.JavaMail.mobile-sync@iagt29>
  2 siblings, 0 replies; 8+ messages in thread
From: Alex @ 2011-09-11 15:40 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi,

>> I have a few two-disk RAID1 partitions that I'd like to convert to
>> three-disk RAID5 partitions, using Fedora 15 with ext4. I've read a few
>> docs online, but none that are authoritative or current. Some even say
>> to zero the superblock first, which doesn't sound safe at all.
...
>> Is the general process to first convert the RAID1 to a two-disk RAID5,
>> then --add the third disk?
>
> I strongly suggest that you create a couple of loop-back devices and
> experiment.
> ie.
>
>  for i in 0 1 2
>  do
>    dd if=/dev/zero of=/tmp/file$i bs=1M count=100
>    losetup /dev/loop$i /tmp/file$i
>  done
>
>  mdadm -C /dev/md0 -l1 -n2 -e 1.0 /dev/loop0 /dev/loop1
>  mkfs /dev/md0
>  mount /dev/md0 /mnt
>  cp -r /lib /mnt
>
>  then try some things. e.g.
>
>  mdadm /dev/md0 --add /dev/loop2
>  mdadm --grow /dev/md0 --level=5 --raid-devices=3
>
>  Try failing a device during the reshape.  Check if the data is still OK.
>  Try it as two separate steps and see if it makes a difference.
>
> Experimenting will give you a lot more confidence than any amount of
> authoritative statements about what it should do.

Yes, definitely. I have an understanding now of what needs to be done,
and am comfortable with mdadm, just not this procedure. I've followed
your steps and they worked successfully.

I just wasn't sure from my reading whether this was a supported
procedure, or whether there were still experimental steps required.

> There was once a tool called raidreconf which would reshape an array while it
> was offline.  That isn't supported anymore.

Yes, I came across that as well, but it sounds like it's just no longer
necessary because reshaping is so well supported in mdadm itself,
correct?

Thanks,
Alex


* Re: Converting RAID1 to RAID5
  2011-09-11  3:57 ` NeilBrown
  2011-09-11 15:40   ` Alex
@ 2011-09-15 23:50   ` Alex
  2011-09-16  3:57     ` NeilBrown
       [not found]   ` <27910711.10376.1316131253095.JavaMail.mobile-sync@iagt29>
  2 siblings, 1 reply; 8+ messages in thread
From: Alex @ 2011-09-15 23:50 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi,

Last week you were helping me with trying to convert a RAID1 volume to
RAID5. I've put together a server to test, and have made some
progress, but have a few questions.

>  then try some things. e.g.
>
>  mdadm /dev/md0 --add /dev/loop2
>  mdadm --grow /dev/md0 --level=5 --raid-devices=3
>
>  Try failing a device during the reshape.  Check if the data is still OK.
>  Try it as two separate steps and see if it makes a difference.

I partitioned a third disk in the same way as the other two, and
successfully added its partitions to their respective existing arrays as
spares.

I was able to grow md0, which is mounted on /boot; it resynced and was
successfully converted to RAID5.
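
In case it's useful, the commands were essentially the following (sdc is the
new disk, partitioned to match the other two):

# mdadm /dev/md0 --add /dev/sdc1
# mdadm /dev/md1 --add /dev/sdc2
# mdadm /dev/md2 --add /dev/sdc3
# mdadm --grow /dev/md0 --level=5 --raid-devices=3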

When I try to grow the other two partitions (/ and /home), it fails
with device busy:

# mdadm --grow /dev/md2 --level=5 --raid-devices=3
mdadm: level of /dev/md2 changed to raid5
mdadm: Need to backup 128K of critical section..
mdadm: Cannot set device shape for /dev/md2: Device or resource busy
       Bitmap must be removed before shape can be changed
mdadm: aborting level change

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdc3[2](S) sda3[0] sdb3[1]
      186366908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

md1 : active raid1 sdc2[2](S) sda2[0] sdb2[1]
      51198908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid5 sdc1[2] sda1[0] sdb1[1]
      1023976 blocks super 1.0 level 5, 4k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

/dev/md2 (/home) isn't mounted.

When the conversion to RAID5 is complete, how can I regenerate the
mdadm.conf to properly reflect the change?

Thanks,
Alex


* Re: Converting RAID1 to RAID5
       [not found]   ` <27910711.10376.1316131253095.JavaMail.mobile-sync@iagt29>
@ 2011-09-16  2:54     ` Jérôme Poulin
  0 siblings, 0 replies; 8+ messages in thread
From: Jérôme Poulin @ 2011-09-16  2:54 UTC (permalink / raw)
  To: Alex; +Cc: linux-raid

On 2011-09-15, at 19:50, Alex <mysqlstudent@gmail.com> wrote:

> When I try to grow the other two partitions (/ and /home), it fails
> with device busy:
>
> # mdadm --grow /dev/md2 --level=5 --raid-devices=3
> mdadm: level of /dev/md2 changed to raid5
> mdadm: Need to backup 128K of critical section..
> mdadm: Cannot set device shape for /dev/md2: Device or resource busy
>       Bitmap must be removed before shape can be changed
> mdadm: aborting level change
>

You must remove the bitmap first, as the error message says, using
mdadm --grow --bitmap=none.
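
For md2 that would be roughly the following (and you can add the bitmap back
with --bitmap=internal once the reshape has finished):

 mdadm --grow /dev/md2 --bitmap=none
 mdadm --grow /dev/md2 --level=5 --raid-devices=3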

>
> When the conversion to RAID5 is complete, how can I regenerate the
> mdadm.conf to properly reflect the change?
>

The output of mdadm -Es might be enough for you.
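
For example (on Fedora the file should be /etc/mdadm.conf; keep a backup and
prune any stale ARRAY lines by hand afterwards):

 cp /etc/mdadm.conf /etc/mdadm.conf.bak
 mdadm -Es >> /etc/mdadm.conf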


* Re: Converting RAID1 to RAID5
  2011-09-15 23:50   ` Alex
@ 2011-09-16  3:57     ` NeilBrown
  2011-09-16 13:56       ` Alex
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-09-16  3:57 UTC (permalink / raw)
  To: Alex; +Cc: linux-raid

On Thu, 15 Sep 2011 19:50:08 -0400 Alex <mysqlstudent@gmail.com> wrote:

> Hi,
> 
> Last week you were helping me with trying to convert a RAID1 volume to
> RAID5. I've put together a server to test, and have made some
> progress, but have a few questions.
> 
> >  then try some things. e.g.
> >
> >  mdadm /dev/md0 --add /dev/loop2
> >  mdadm --grow /dev/md0 --level=5 --raid-devices=3
> >
> >  Try failing a device during the reshape.  Check if the data is still OK.
> >  Try it as two separate steps and see if it makes a difference.
> 
> I partitioned a third disk in the same way as the other two, and
> successfully added its partitions to their respective existing arrays as
> spares.
> 
> I was able to grow md0, which is mounted on /boot; it resynced and was
> successfully converted to RAID5.
> 
> When I try to grow the other two partitions (/ and /home), it fails
> with device busy:
> 
> # mdadm --grow /dev/md2 --level=5 --raid-devices=3
> mdadm: level of /dev/md2 changed to raid5
> mdadm: Need to backup 128K of critical section..
> mdadm: Cannot set device shape for /dev/md2: Device or resource busy
>        Bitmap must be removed before shape can be changed
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> mdadm: aborting level change
> 
> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md2 : active raid1 sdc3[2](S) sda3[0] sdb3[1]
>       186366908 blocks super 1.1 [2/2] [UU]
>       bitmap: 1/2 pages [4KB], 65536KB chunk
> 
> md1 : active raid1 sdc2[2](S) sda2[0] sdb2[1]
>       51198908 blocks super 1.1 [2/2] [UU]
>       bitmap: 1/1 pages [4KB], 65536KB chunk

You need to remove those bitmaps first.  Put them back after the reshape
completes.
(mdadm --grow --bitmap=none ; mdadm --grow --bitmap=internal)

> 
> md0 : active raid5 sdc1[2] sda1[0] sdb1[1]
>       1023976 blocks super 1.0 level 5, 4k chunk, algorithm 2 [3/3] [UUU]
> 
> unused devices: <none>
> 
> /dev/md2 (/home) isn't mounted.
> 
> When the conversion to RAID5 is complete, how can I regenerate the
> mdadm.conf to properly reflect the change?

I would use an editor.
The output of "mdadm -Ds" could be a helpful guide.

NeilBrown

> 
> Thanks,
> Alex



* Re: Converting RAID1 to RAID5
  2011-09-16  3:57     ` NeilBrown
@ 2011-09-16 13:56       ` Alex
  2011-09-16 15:10         ` Robin Hill
  0 siblings, 1 reply; 8+ messages in thread
From: Alex @ 2011-09-16 13:56 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi,

>> I partitioned a third disk in the same way as the other two, and
>> successfully added its partitions to their respective existing arrays as
>> spares.
>>
>> I was able to grow md0, which is mounted on /boot; it resynced and was
>> successfully converted to RAID5.
>>
>> When I try to grow the other two partitions (/ and /home), it fails
>> with device busy:
>>
>> # mdadm --grow /dev/md2 --level=5 --raid-devices=3
>> mdadm: level of /dev/md2 changed to raid5
>> mdadm: Need to backup 128K of critical section..
>> mdadm: Cannot set device shape for /dev/md2: Device or resource busy
>>        Bitmap must be removed before shape can be changed
>         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> mdadm: aborting level change
>>
>> # cat /proc/mdstat
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md2 : active raid1 sdc3[2](S) sda3[0] sdb3[1]
>>       186366908 blocks super 1.1 [2/2] [UU]
>>       bitmap: 1/2 pages [4KB], 65536KB chunk
>>
>> md1 : active raid1 sdc2[2](S) sda2[0] sdb2[1]
>>       51198908 blocks super 1.1 [2/2] [UU]
>>       bitmap: 1/1 pages [4KB], 65536KB chunk
>
> You need to remove those bitmaps first.  Put them back after the reshape
> completes.
> (mdadm --grow --bitmap=none ; mdadm --grow --bitmap=internal)

Okay, great. I didn't see this documented this way in the man page.

For completeness, these are the steps I have followed, assuming a
RAID1 array is md0:

# mdadm --grow /dev/md0 --bitmap=none
# mdadm --grow /dev/md0 --level=5 --raid-devices=3
- wait for reshape to complete
# mdadm --grow /dev/md0 --bitmap=internal
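
If anyone wants to script the "wait for reshape to complete" step, mdadm can
block on it itself, e.g.:

# mdadm --wait /dev/md0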

I noticed there is a difference between one array and another:

md125 : active raid5 sdb1[0] sda1[2] sdc1[1]
      1023976 blocks super 1.0 level 5, 4k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid5 sdb2[0] sda2[2] sdc2[1]
      102397816 blocks super 1.1 level 5, 4k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

Is that a reference to the superblock version? Why would they be different?
They were both created at the same time with the same Fedora 15 version.
The superblock is created at the time the filesystem is created, correct?

It looks like this has also now affected grub, as the system no longer
boots. Is this expected?

When I try to reinstall grub, it fails with an error relating to /boot:

# grub-install --recheck --root-directory=/mnt/disk /dev/sda
Probing devices to guess BIOS drives. This may take a long time.
/dev/md125 does not have any corresponding BIOS drive.

Maybe /boot should be left as RAID1?

Thanks,
Alex


* Re: Converting RAID1 to RAID5
  2011-09-16 13:56       ` Alex
@ 2011-09-16 15:10         ` Robin Hill
  0 siblings, 0 replies; 8+ messages in thread
From: Robin Hill @ 2011-09-16 15:10 UTC (permalink / raw)
  To: Alex; +Cc: NeilBrown, linux-raid

On Fri Sep 16, 2011 at 09:56:58AM -0400, Alex wrote:

> Hi,
> 
> For completeness, these are the steps I have followed, assuming a
> RAID1 array is md0:
> 
> # mdadm --grow /dev/md0 --bitmap=none
> # mdadm --grow /dev/md0 --level=5 --raid-devices=3
> - wait for reshape to complete
> # mdadm --grow /dev/md0 --bitmap=internal
> 
> I noticed there is a difference between one array and another:
> 
> md125 : active raid5 sdb1[0] sda1[2] sdc1[1]
>       1023976 blocks super 1.0 level 5, 4k chunk, algorithm 2 [3/3] [UUU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
> 
> md126 : active raid5 sdb2[0] sda2[2] sdc2[1]
>       102397816 blocks super 1.1 level 5, 4k chunk, algorithm 2 [3/3] [UUU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
> 
> Is that a reference to the superblock version? Why would they be different?
> They were both created at the same time with the same Fedora 15 version.
> The superblock is created at the time the filesystem is created, correct?
> 
I assume md125 is /boot? This showed up as superblock 1.0 earlier
anyway. You need to use either 0.9 or 1.0 with grub (grub 1 anyway, I've
never used grub 2 so I'm not sure what that handles) as they place the
RAID metadata at the end of the drives. This means grub can access the
drives as though they were independent disks, ignoring the RAID. If you
set these up at install time then I assume Fedora automatically used the
correct superblock.
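
If you want to confirm which metadata version each member actually carries,
something like this should show it (sda1 just as an example):

 mdadm --examine /dev/sda1 | grep Version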

> It looks like this has also now affected grub, as the system no longer
> boots. Is this expected?
> 
> When I try to reinstall grub, it fails with an error relating to /boot:
> 
> # grub-install --recheck --root-directory=/mnt/disk /dev/sda
> Probing devices to guess BIOS drives. This may take a long time.
> /dev/md125 does not have any corresponding BIOS drive.
> 
> Maybe /boot should be left as RAID1?
> 
Yes, grub 1 can only boot from (what it sees as) standalone drives, so a
RAID1 with superblock 0.9 or 1.0 will work, as the filesystem is in exactly
the same position as on a non-RAID drive. You'll need to convert this back,
though you can set it up as a 3-disk RAID1, giving you extra redundancy. I
doubt you'd need the extra space there anyway.
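
One conservative way back (sketched from your earlier output, not something
I've tested here) is to back up the /boot filesystem, recreate the array as a
3-disk RAID1 with 1.0 metadata, restore it, and reinstall grub. Remember the
recreated array gets a new UUID, so update mdadm.conf afterwards, and adjust
the mount point to wherever /boot is actually mounted:

 tar -C /mnt/disk/boot -cf /root/boot.tar .
 umount /mnt/disk/boot
 mdadm --stop /dev/md125
 mdadm --create /dev/md125 --level=1 --raid-devices=3 --metadata=1.0 \
       /dev/sda1 /dev/sdb1 /dev/sdc1
 mkfs.ext4 /dev/md125
 mount /dev/md125 /mnt/disk/boot
 tar -C /mnt/disk/boot -xf /root/boot.tar
 grub-install --recheck --root-directory=/mnt/disk /dev/sda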

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |


Thread overview: 8+ messages
2011-09-11  3:38 Converting RAID1 to RAID5 Alex
2011-09-11  3:57 ` NeilBrown
2011-09-11 15:40   ` Alex
2011-09-15 23:50   ` Alex
2011-09-16  3:57     ` NeilBrown
2011-09-16 13:56       ` Alex
2011-09-16 15:10         ` Robin Hill
     [not found]   ` <27910711.10376.1316131253095.JavaMail.mobile-sync@iagt29>
2011-09-16  2:54     ` Jérôme Poulin
