All of lore.kernel.org
* raid10,f2 Add a Controller: Which drives to move?
@ 2010-04-11 17:08 Michael McCallister
  2010-04-11 21:31 ` Michael Evans
  2010-04-12  3:41 ` Tomáš Dulík
  0 siblings, 2 replies; 10+ messages in thread
From: Michael McCallister @ 2010-04-11 17:08 UTC (permalink / raw)
  To: linux-raid

I have an existing raid10,f2 array with four drives, all running on a single
SATA controller.  I have a second controller to add to the system and I'd like
to split the existing drives between the two controllers.  I'm hoping to make
the configuration more robust against the possibility of a single controller
failure.  It would also be nice to get more performance out of the array, though
I doubt having a single controller is a bottleneck with only 4 7200RPM drives.

So with four drives sda, sdb, sdc, and sdd and two controllers C1 and C2, should
I go with

    C1: sda, sdb
    C2: sdc, sdd

or

    C1: sda, sdc
    C2: sdb, sdd

or some other configuration?

I've looked through the last six months of messages in the archives, and the
md(4) and mdadm(8) manpages, and the wiki on https://raid.wiki.kernel.org/ and
didn't see anything that quite answers this question at a level I can
understand.  If there is a reference I can consult, I'm happy to keep digging.

If it will help, the output of /proc/mdstat and "mdadm --detail" on the md
device are included below.


Mike McCallister


# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md3 : active raid10 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      1445318656 blocks super 1.1 256K chunks 2 far-copies [4/4] [UUUU]
      bitmap: 6/345 pages [24KB], 2048KB chunk

# mdadm --detail /dev/md3
/dev/md3:
        Version : 01.01.03
  Creation Time : Sun Nov  9 22:47:00 2008
     Raid Level : raid10
     Array Size : 1445318656 (1378.36 GiB 1480.01 GB)
  Used Dev Size : 1445318656 (689.18 GiB 740.00 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Apr 11 11:47:09 2010
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=1, far=2
     Chunk Size : 256K

           Name : ozark:3
           UUID : e7705941:e81cfbe1:7bf6ab9f:2b979a89
         Events : 84

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-11 17:08 raid10,f2 Add a Controller: Which drives to move? Michael McCallister
@ 2010-04-11 21:31 ` Michael Evans
  2010-04-12  6:17   ` Michael McCallister
  2010-04-12  3:41 ` Tomáš Dulík
  1 sibling, 1 reply; 10+ messages in thread
From: Michael Evans @ 2010-04-11 21:31 UTC (permalink / raw)
  To: Michael McCallister; +Cc: linux-raid

On Sun, Apr 11, 2010 at 10:08 AM, Michael McCallister <mike@mccllstr.com> wrote:
> I have an existing raid10,f2 array with four drives, all running on a single
> SATA controller.  I have a second controller to add to the system and I'd like
> to split the existing drives between the two controllers.  I'm hoping to make
> the configuration more robust against the possibility of a single controller
> failure.  It would also be nice to get more performance out of the array, though
> I doubt having a single controller is a bottleneck with only 4 7200RPM drives.
>
> So with four drives sda, sdb, sdc, and sdd and two controllers C1 and C2, should
> I go with
>
>    C1: sda, sdb
>    C2: sdc, sdd
>
> or
>
>    C1: sda, sdc
>    C2: sdb, sdd
>
> or some other configuration?
>
> [remainder of the original message, /proc/mdstat, and mdadm --detail output snipped]

I'd use smartctl to get the drive serial numbers, then move what are
currently shown as sdb5 and sdd5 to your new controller during a
coldboot.
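For anyone wanting to script that serial-number mapping, here is a minimal sketch that pulls the serial out of `smartctl -i` output. The device path, the subprocess call, and the sample model/serial below are illustrative assumptions, not values from this thread; it requires smartmontools and root privileges to query real drives.

```python
import re
import subprocess

def parse_serial(smartctl_info):
    # smartctl -i prints a line of the form "Serial Number:  <serial>"
    m = re.search(r"^Serial Number:\s*(\S+)", smartctl_info, re.MULTILINE)
    return m.group(1) if m else None

def drive_serial(dev):
    """Serial number for a block device, e.g. drive_serial('/dev/sdb')."""
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    return parse_serial(out)

# Parsing demo against a captured fragment (made-up model and serial):
sample = """Device Model:     EXAMPLE-DISK-750GB
Serial Number:    SN-0000001234
Firmware Version: 1.00
"""
print(parse_serial(sample))  # SN-0000001234
```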
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-11 17:08 raid10,f2 Add a Controller: Which drives to move? Michael McCallister
  2010-04-11 21:31 ` Michael Evans
@ 2010-04-12  3:41 ` Tomáš Dulík
  2010-04-12  6:07   ` Stefan /*St0fF*/ Hübner
  2010-04-12  6:07   ` Michael McCallister
  1 sibling, 2 replies; 10+ messages in thread
From: Tomáš Dulík @ 2010-04-12  3:41 UTC (permalink / raw)
  To: Michael McCallister; +Cc: linux-raid

Hi,
I am not sure I understand your question, but I suspect you will hit
the problem of udev naming of your disks and, if your disks are
hot-swappable, also the hotswap problem that is currently being
discussed on this mailing list.

I have spent a great deal of time trying to solve both of these
problems with the current Linux SW RAID (mdadm v2.6), and here is the
first part of my solution:
http://wiki.debian.org/Persistent_disk_names

The page is not really finished yet; I am still experimenting with my
server, so the documentation must wait.


Michael McCallister wrote:

> So with four drives sda, sdb, sdc, and sdd and two controllers C1 and C2, should
> I go with
>
>     C1: sda, sdb
>     C2: sdc, sdd
>
> or
>
>     C1: sda, sdc
>     C2: sdb, sdd
>
> or some other configuration?
>   


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-12  3:41 ` Tomáš Dulík
@ 2010-04-12  6:07   ` Stefan /*St0fF*/ Hübner
  2010-04-12  7:43     ` Tomáš Dulík
  2010-04-12  6:07   ` Michael McCallister
  1 sibling, 1 reply; 10+ messages in thread
From: Stefan /*St0fF*/ Hübner @ 2010-04-12  6:07 UTC (permalink / raw)
  To: Linux RAID

On 12.04.2010 05:41, Tomáš Dulík wrote:
> Hi,
> I am not sure if I understand your question, but I suppose you will hit
> the problem of udev naming of your disks, and if your disks are
> hotswap-able, then also the problem of hotswap, which is currently
> discussed here in this mailing list.
> 
> I have spent too much time trying to solve both these problems with the
> current linux SW RAID (mdadm v. 2.6) and here is the first part of my
> solution:
> http://wiki.debian.org/Persistent_disk_names

I cannot quite understand your problem.  As every part of the array
contains its own metadata, it doesn't matter to md which /dev/sdX a
drive is.  It might matter a bit for boot-time assembly, but actually
that's what UUIDs are for.

> 
> The page is still not really  finished, I am still playing with my
> server so the documentation must wait.
> 
> 
> Michael McCallister wrote:
> 
>> So with four drives sda, sdb, sdc, and sdd and two controllers C1 and
>> C2, should
>> I go with
>>
>>     C1: sda, sdb
>>     C2: sdc, sdd
>>
>> or
>>
>>     C1: sda, sdc
>>     C2: sdb, sdd
>>
>> or some other configuration?
>>   

But I understand yours, Michael.  You would like to put each of those
RAID0-like halves of your RAID10 on its own controller.  The question
is what "NEAR" and "FAR" actually mean here.
I'd read "NEAR" as the closer disk - but closer in the array tree, or
closer in its logical base LBA?  Neil, could you please clarify which
is which?


All the best,
Stefan

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-12  3:41 ` Tomáš Dulík
  2010-04-12  6:07   ` Stefan /*St0fF*/ Hübner
@ 2010-04-12  6:07   ` Michael McCallister
  1 sibling, 0 replies; 10+ messages in thread
From: Michael McCallister @ 2010-04-12  6:07 UTC (permalink / raw)
  To: linux-raid

Tomáš Dulík <dulik <at> unart.cz> writes:
> I am not sure if I understand your question, but I suppose you will hit 
> the problem of udev naming of your disks, and if your disks are 
> hotswap-able, then also the problem of hotswap, which is currently 
> discussed here in this mailing list.

Thanks for the heads up.  I did the swap while powered down and only brought the
system up using an Ubuntu Live CD, so the udev on my current root partition
didn't come into play.  During that live-CD boot, I had no apparent problems
with the changing device names.  They had changed pretty significantly: sdb
became sdd, sdc became sdb, sdd became sde, and another device not part of the
array became sdc.  In spite of the shuffle and a new SATA controller, mdadm was
able to reassemble the RAID devices on those drive partitions with no input from
me.  I believe this is because mdadm scanned the partitions looking for
the appropriate UUIDs in the metadata, and with all the devices accounted for,
it was able to "do the right thing".  I was very impressed.

When I boot using the original root partition (running Ubuntu Hardy), maybe udev
will shuffle the device names more severely.  But so long as mdadm is configured
to scan all the available partitions, I expect it will locate all the pieces,
whatever they may be named, and successfully reassemble the arrays.


Mike



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-11 21:31 ` Michael Evans
@ 2010-04-12  6:17   ` Michael McCallister
  0 siblings, 0 replies; 10+ messages in thread
From: Michael McCallister @ 2010-04-12  6:17 UTC (permalink / raw)
  To: linux-raid

Michael Evans <mjevans1983 <at> gmail.com> writes:
> I'd use smartctl to get the drive serial numbers, then move what are
> currently shown as sdb5 and sdd5 to your new controller during a
> coldboot.

Thanks.  I did this and was able to boot and reassemble successfully with a live
CD.  However, the question in the back of my mind is "is there any way to tell
whether this would have worked with one of the controllers down?"  In other
words, if I just pulled the plug on sdb and sdd instead of moving them to a new
controller, would the array have been able to be reassembled in a degraded mode
with only sda and sdc?  Or would it actually need a different combination of
drives (e.g., sda and sdb) to be able to reassemble?

Obviously I could try this, but if I am able to reassemble an array with two
drives missing, won't I have to resync the other drives when I return them to
the array?  I'd rather avoid experiments involving resyncs if possible.


Mike



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-12  6:07   ` Stefan /*St0fF*/ Hübner
@ 2010-04-12  7:43     ` Tomáš Dulík
  2010-04-13  3:26       ` Simon Matthews
  0 siblings, 1 reply; 10+ messages in thread
From: Tomáš Dulík @ 2010-04-12  7:43 UTC (permalink / raw)
  To: st0ff; +Cc: Linux RAID

Stefan /*St0fF*/ Hübner wrote:
> I cannot quite understand your problem.  As every part of the array
> contains it's own metadata, it doesn't matter to md which /dev/sdX a
> drive is.  It might matter a bit for boot-time assembly, but actually
> that's what UUIDs are for.
>   
I know how UUIDs work.
My problem with device names is not critical; it is about the user
friendliness of physical disk management.
If a disk fails and I receive the email "A Fail event had been detected
on md device /dev/md2. It could be related to component device
/dev/sdd3", how will I know which disk to replace, if the device name
is not fixed/persistent? Is it the disk in the first bay, or the
second? Aside from the solution documented on the wiki page, I could
also try a simpler one based on the idea here:
http://www.outsidaz.org/blog/2009/11/05/identifying-failed-drives-via-udev-and-mdadm/

But for disk management purposes, I prefer having the disk names fixed
according to the disk bay position, so the disk in bay no. 1 is
/dev/sda, bay no. 2 is /dev/sdb, etc.
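For the record, the usual way to pin a name to a bay is a udev rule keyed on the physical port path. The ID_PATH values below are made-up placeholders (read the real ones with `udevadm info`), and adding a SYMLINK is generally safer than renaming the kernel device:

```
# /etc/udev/rules.d/59-bay-names.rules  (hypothetical ID_PATH values)
# Bay 1: first port of the first controller -> /dev/bay1
KERNEL=="sd?", ENV{ID_PATH}=="pci-0000:00:1f.2-ata-1", SYMLINK+="bay1"
# Bay 2: second port -> /dev/bay2
KERNEL=="sd?", ENV{ID_PATH}=="pci-0000:00:1f.2-ata-2", SYMLINK+="bay2"
```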

The wiki page I created covers exactly this. I haven't found a document
like it anywhere else, so if it helps someone, I'll be glad.

Tom

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-12  7:43     ` Tomáš Dulík
@ 2010-04-13  3:26       ` Simon Matthews
  2010-04-13  9:23         ` Keld Simonsen
  0 siblings, 1 reply; 10+ messages in thread
From: Simon Matthews @ 2010-04-13  3:26 UTC (permalink / raw)
  To: Tomáš Dulík; +Cc: st0ff, Linux RAID

2010/4/12 Tomáš Dulík <dulik@unart.cz>:
> Stefan /*St0fF*/ Hübner wrote:
>>
>> I cannot quite understand your problem.  As every part of the array
>> contains it's own metadata, it doesn't matter to md which /dev/sdX a
>> drive is.  It might matter a bit for boot-time assembly, but actually
>> that's what UUIDs are for.
>>
>
> I know how UUIDs work.
> My problem with device names is not a "critical", it's about "user
> friendliness" of the physical disk management.
> If a disk fails and I receive email "A Fail event had been detected on md
> device /dev/md2. It could be related to component device /dev/sdd3", how
> will I know which disk should be replaced, if the device name is not
> fixed/persistent? I

Record the serial number of each disk and which bay it is in.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-13  3:26       ` Simon Matthews
@ 2010-04-13  9:23         ` Keld Simonsen
  2010-04-13 13:09           ` Neil Brown
  0 siblings, 1 reply; 10+ messages in thread
From: Keld Simonsen @ 2010-04-13  9:23 UTC (permalink / raw)
  To: Simon Matthews; +Cc: Tomáš Dulík, st0ff, Linux RAID

I am not sure it has been said, but for a 4-disk raid10,f2 array you
should place the first and second disks on one controller, and the 3rd
and 4th on the second controller. Then you would have a copy of all
blocks even if one controller fails.

I believe the order is defined by the order in which the disks are
specified on the mdadm --create line.

best regards
keld


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: raid10,f2 Add a Controller: Which drives to move?
  2010-04-13  9:23         ` Keld Simonsen
@ 2010-04-13 13:09           ` Neil Brown
  0 siblings, 0 replies; 10+ messages in thread
From: Neil Brown @ 2010-04-13 13:09 UTC (permalink / raw)
  To: Keld Simonsen
  Cc: Simon Matthews, Tomáš Dulík, st0ff, Linux RAID

On Tue, 13 Apr 2010 11:23:54 +0200
Keld Simonsen <keld@keldix.com> wrote:

> I am not sure it has been said, but for a 4 disk raid10,f2 array
> you should place the first and the second disk on one controller, and then the 3rd
> and 4th on the second controller. Then you would have a copy of all blocks even
> if one controller fails.

Uhmm... no.

Disk:   1    2    3    4
blocks  a    b    c    d
        e    f    g    h
         .........
        d    a    b    c
        h    e    f    g

If you only have first and second, then block c is missing.
You want 1st and 3rd on one controller, 2nd and 4th on the other
Then either controller can access all blocks.
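The table above can also be checked mechanically. This sketch simply transcribes the diagram (chunk letter -> disks holding a copy) and tests which chunks each controller's pair of disks can still reach:

```python
# Transcription of the 4-disk raid10,f2 diagram above:
# disk 1 holds a,e (near) and d,h (far); disk 2 holds b,f and a,e; etc.
layout = {
    "a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {4, 1},
    "e": {1, 2}, "f": {2, 3}, "g": {3, 4}, "h": {4, 1},
}

def reachable(disks):
    # Chunks with at least one copy on the given set of disks.
    return {chunk for chunk, copies in layout.items() if copies & disks}

print(sorted(layout.keys() - reachable({1, 2})))  # ['c', 'g'] missing on 1+2
print(sorted(layout.keys() - reachable({1, 3})))  # [] - nothing missing on 1+3
```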


> 
> I believe the order is defined by the order the disks are specified in
> on the mdadm --create line.

Correct.

NeilBrown



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2010-04-13 13:09 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-04-11 17:08 raid10,f2 Add a Controller: Which drives to move? Michael McCallister
2010-04-11 21:31 ` Michael Evans
2010-04-12  6:17   ` Michael McCallister
2010-04-12  3:41 ` Tomáš Dulík
2010-04-12  6:07   ` Stefan /*St0fF*/ Hübner
2010-04-12  7:43     ` Tomáš Dulík
2010-04-13  3:26       ` Simon Matthews
2010-04-13  9:23         ` Keld Simonsen
2010-04-13 13:09           ` Neil Brown
2010-04-12  6:07   ` Michael McCallister
