All of lore.kernel.org
 help / color / mirror / Atom feed
* raid 10 or 1+0 ?
@ 2005-04-22 13:55 yves DEGLAIN
  2005-04-23 11:26 ` Tobias DiPasquale
  0 siblings, 1 reply; 12+ messages in thread
From: yves DEGLAIN @ 2005-04-22 13:55 UTC (permalink / raw)
  To: linux-raid

hello

does raid 10 work like raid 1+0, and which one is the more stable/reliable 
choice for the root fs to boot from?


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: raid 10 or 1+0 ?
  2005-04-22 13:55 raid 10 or 1+0 ? yves DEGLAIN
@ 2005-04-23 11:26 ` Tobias DiPasquale
  2005-04-23 16:21   ` Guy
  0 siblings, 1 reply; 12+ messages in thread
From: Tobias DiPasquale @ 2005-04-23 11:26 UTC (permalink / raw)
  To: yves DEGLAIN; +Cc: linux-raid

On 4/22/05, yves DEGLAIN <admin@avallon.be> wrote:
> does raid 10 work like raid 1+0, and which one is the more stable/reliable
> choice for the root fs to boot from?

I believe that RAID 10 == RAID 1+0. They are just two notations for
the same thing. Thus, either would be equally suitable.

-- 
[ Tobias DiPasquale ]
0x636f6465736c696e67657240676d61696c2e636f6d

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
  2005-04-23 11:26 ` Tobias DiPasquale
@ 2005-04-23 16:21   ` Guy
  2005-04-24 13:54     ` Andre Noll
  2005-04-25 20:01     ` Molle Bestefich
  0 siblings, 2 replies; 12+ messages in thread
From: Guy @ 2005-04-23 16:21 UTC (permalink / raw)
  To: 'Tobias DiPasquale', 'yves DEGLAIN'; +Cc: linux-raid

md supports a built-in RAID10.  RAID10 is not RAID1+0, but it is similar.
RAID10 can be used with an odd number of disks and is a single array.
RAID1+0 is a single RAID0 made of 2 or more RAID1 arrays.
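
A back-of-the-envelope sketch of that difference (illustrative Python, not md's actual code; md's RAID10 defaults to 2 copies of each chunk):

```python
def raid1plus0_capacity(disks, disk_size):
    """Usable capacity of RAID1+0: a RAID0 striped over 2-disk RAID1 mirrors."""
    if disks < 2 or disks % 2:
        raise ValueError("RAID1+0 needs an even number of disks")
    return (disks // 2) * disk_size

def md_raid10_capacity(disks, disk_size, copies=2):
    """Usable capacity of md RAID10 keeping `copies` copies of each chunk."""
    if disks < copies:
        raise ValueError("need at least as many disks as copies")
    return disks * disk_size // copies

# Three 100 GB disks: RAID1+0 cannot use them at all, md RAID10 can.
print(md_raid10_capacity(3, 100))  # 150 GB usable out of 300 GB raw
```

The point of the sketch is only the odd-disk case: RAID1+0 must pair disks into mirrors first, while md RAID10 just spreads copies of chunks over however many disks you give it.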

I have never used them, so I can't recommend one over the other.

Some people have problems with RAID1+0; it seems the kernel tries to assemble
the RAID0 array before the RAID1 arrays.  Maybe not on every system.

RAID10 is somewhat new, so may not have had much testing.  I don't recall
anyone posting comments on RAID10.

Maybe we need some success stories for RAID10 and RAID1+0 mounted on "/".

Guy

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Tobias DiPasquale
> Sent: Saturday, April 23, 2005 7:26 AM
> To: yves DEGLAIN
> Cc: linux-raid@vger.kernel.org
> Subject: Re: raid 10 or 1+0 ?
> 
> On 4/22/05, yves DEGLAIN <admin@avallon.be> wrote:
> > does raid 10 work like raid 1+0, and which one is the more stable/reliable
> > choice for the root fs to boot from?
> 
> I believe that RAID 10 == RAID 1+0. They are just two notations for
> the same thing. Thus, either would be equally suitable.
> 
> --
> [ Tobias DiPasquale ]
> 0x636f6465736c696e67657240676d61696c2e636f6d
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: raid 10 or 1+0 ?
  2005-04-23 16:21   ` Guy
@ 2005-04-24 13:54     ` Andre Noll
  2005-04-25 20:01     ` Molle Bestefich
  1 sibling, 0 replies; 12+ messages in thread
From: Andre Noll @ 2005-04-24 13:54 UTC (permalink / raw)
  To: linux-raid

On Sat, 23 Apr 2005 12:21:17 -0400 you wrote in local.lists.linux-raid:

> Maybe we need some success stories for RAID10 and RAID1+0 mounted on "/".

I have had such a setup up and running for quite some time now: 

cat /proc/mdstat 
Personalities : [raid0] [raid1] 
md3 : active raid0 md1[0] md2[1]
      156247808 blocks 64k chunks
      
md2 : active raid1 hda2[0] hdk2[1]
      78123968 blocks [2/2] [UU]
      
md1 : active raid1 hdc2[0] hdg2[1]
      78123968 blocks [2/2] [UU]
      
md0 : active raid1 hdc1[2] hda1[3] hdk1[1] hdg1[0]
      49280 blocks [4/4] [UUUU]

My rootfs is on an LV. The corresponding VG is made from md3. 

This works as long as you do not rely on the kernel to assemble your
arrays but use an initrd to do it instead.

Just use something like this in your linuxrc, right after creating the
device nodes (if you use udev):

	if test -e /proc/mdstat; then
		log "scanning for multi disk devices"
		echo "DEVICE /dev/hd[a-z] /dev/sd[a-z] /dev/md[0-9]" > /etc/mdadm.conf
		mdadm --examine --scan --config=/etc/mdadm.conf \
			>> /etc/mdadm.conf
		mdadm --assemble --scan
	fi

	if test -c /dev/mapper/control; then
		log "setting up lvm"
		vgscan --mknodes
		vgchange -a y
	fi


BTW, you should definitely use striped mirrors rather than mirrored
stripes.

However, note that you cannot boot from a striped mirror. That is,
you need a tiny partition, preferably at the beginning of your disks,
to store the kernel image and the initrd, but not the rootfs. You
can make it a RAID1 over all disks, like my md0 above, and use lilo
to write an MBR to _all_ disks. That way you can shuffle your
disks around and your system will still boot.
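
A minimal lilo.conf fragment matching that scheme might look like this (a sketch assuming lilo's raid-extra-boot option; the device and volume names are illustrative, not from the setup above):

```conf
# /etc/lilo.conf -- kernel + initrd live on a small RAID1 (md0)
boot=/dev/md0
raid-extra-boot=mbr-only   # write a boot record to the MBR of every RAID1 member
root=/dev/vg0/root         # illustrative: rootfs on LVM over the striped mirrors

image=/boot/vmlinuz
        label=linux
        initrd=/boot/initrd.img
        read-only
```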

More details on request ;)
Andre
-- 
Andre Noll, http://www.mathematik.tu-darmstadt.de/~noll


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: raid 10 or 1+0 ?
  2005-04-23 16:21   ` Guy
  2005-04-24 13:54     ` Andre Noll
@ 2005-04-25 20:01     ` Molle Bestefich
  2005-04-25 20:08       ` Molle Bestefich
  1 sibling, 1 reply; 12+ messages in thread
From: Molle Bestefich @ 2005-04-25 20:01 UTC (permalink / raw)
  To: linux-raid

Guy wrote:
> md supports a built-in RAID10.
> RAID10 can be used with an odd number of disks and is a single array.

*sighs a bit*.. If that's true, someone should perhaps update
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#RAID_10
to reflect the fact that MD's notion of RAID10 is different from other
vendors'.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: raid 10 or 1+0 ?
  2005-04-25 20:01     ` Molle Bestefich
@ 2005-04-25 20:08       ` Molle Bestefich
  2005-04-25 21:49         ` Guy
  0 siblings, 1 reply; 12+ messages in thread
From: Molle Bestefich @ 2005-04-25 20:08 UTC (permalink / raw)
  To: linux-raid

Molle Bestefich wrote:
> Guy wrote:
> > md supports a built-in RAID10.
> > RAID10 can be used with an odd number of disks and is a single array.
> 
> *sighs a bit*..
[snip]

Doesn't look that bad actually.  There's a vendor section called
"proprietary raid levels" where "Linux MD RAID 10" would fit in
nicely.  So if anybody knows enough about how MD RAID 10 works, here's
something to do when you get bored :-).

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
  2005-04-25 20:08       ` Molle Bestefich
@ 2005-04-25 21:49         ` Guy
  0 siblings, 0 replies; 12+ messages in thread
From: Guy @ 2005-04-25 21:49 UTC (permalink / raw)
  To: 'Molle Bestefich', linux-raid

This is from an email Neil Brown sent 8/22/2004:
Following are 4 patches for md in 2.6.8.1-mm4

The first three are minor improvements and modifications either
required by or inspired by the fourth.

The fourth adds a new raid personality - raid10.  At 56K, I'm not 
sure it will get through the mailing list, but interested parties
can find it at:

  http://neilb.web.cse.unsw.edu.au/patches/linux-devel/2.6/2004-08-23-03

raid10 provides a combination of raid0 and raid1.
It requires mdadm 1.7.0 or later to use.  

The next release of mdadm should have better documentation of raid10, but 
from the comment in the .c file:

/*
 * RAID10 provides a combination of RAID0 and RAID1 functionality.
 * The layout of data is defined by 
 *    chunk_size
 *    raid_disks
 *    near_copies (stored in low byte of layout)
 *    far_copies (stored in second byte of layout)
 *
 * The data to be stored is divided into chunks using chunk_size.
 * Each device is divided into far_copies sections.
 * In each section, chunks are laid out in a style similar to raid0, but
 * near_copies copies of each chunk are stored (each on a different drive).
 * The starting device for each section is offset near_copies from the
 * starting device of the previous section.
 * Thus there are (near_copies*far_copies) copies of each chunk, and each
 * is on a different drive.
 * near_copies and far_copies must be at least one, and their product is
 * at most raid_disks.
 */

raid10 is currently marked EXPERIMENTAL, and this should be taken seriously.
A reasonable amount of basic testing hasn't shown any bugs, and it seems to
resync and rebuild correctly.  However, wider testing would help.

NeilBrown
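
The comment above can be turned into a small model (an illustrative Python sketch of the layout as described, not the kernel's actual code; it assumes the per-section device offset works exactly as the comment says):

```python
def copies_of_chunk(chunk, raid_disks, near_copies=1, far_copies=1):
    """Return (section, device) placements for every copy of one chunk.

    Each device holds far_copies sections; within a section, chunks are
    striped raid0-style with near_copies adjacent copies, and each
    section's starting device is shifted by near_copies.
    """
    assert near_copies >= 1 and far_copies >= 1
    assert near_copies * far_copies <= raid_disks
    placements = []
    for section in range(far_copies):
        base = chunk * near_copies + section * near_copies
        for copy in range(near_copies):
            placements.append((section, (base + copy) % raid_disks))
    return placements

# Two near copies on an odd number of disks -- something RAID1+0 cannot do.
for c in range(3):
    print(c, copies_of_chunk(c, raid_disks=3, near_copies=2))
```

Every chunk still ends up with near_copies*far_copies copies, each on a different drive, even when raid_disks is odd.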

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Molle Bestefich
> Sent: Monday, April 25, 2005 4:08 PM
> To: linux-raid@vger.kernel.org
> Subject: Re: raid 10 or 1+0 ?
> 
> Molle Bestefich wrote:
> > Guy wrote:
> > > md supports a built-in RAID10.
> > > RAID10 can be used with an odd number of disks and is a single array.
> >
> > *sighs a bit*..
> [snip]
> 
> Doesn't look that bad actually.  There's a vendor section called
> "proprietary raid levels" where "Linux MD RAID 10" would fit in
> nicely.  So if anybody knows enough about how MD RAID 10 works, here's
> something to do when you get bored :-).


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
  2005-04-26 17:24 Andrew Rechenberg Lists
@ 2005-04-26 18:53 ` Guy
  0 siblings, 0 replies; 12+ messages in thread
From: Guy @ 2005-04-26 18:53 UTC (permalink / raw)
  To: 'Andrew Rechenberg Lists', linux-raid

I would recommend you get the current version of md and mdadm and read the
documentation related to md's RAID10.  I don't have the current versions
myself, and can only hope the documentation has a good explanation.  I
believe RAID10 is for 2.6 kernels only, and I have a 2.4 kernel.

Guy

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Andrew Rechenberg Lists
> Sent: Tuesday, April 26, 2005 1:24 PM
> To: Guy; linux-raid@vger.kernel.org
> Subject: RE: raid 10 or 1+0 ?
> 
> OK, obviously I'm missing something.  :)
> 
> Can someone explain what they think the difference is between
> RAID0+1, RAID1+0, and RAID10, with document references please?  The
> latter two seem to me to be the same thing, and everything I can find
> referencing 1+0 describes what I know as RAID10 - a striped array
> consisting of RAID1 arrays.
> 
> 
> 
> > -----Original Message-----
> > From: Guy [mailto:bugzilla@watkins-home.com]
> > Sent: Monday, April 25, 2005 5:49 PM
> > To: Andrew Rechenberg Lists; linux-raid@vger.kernel.org
> > Subject: RE: raid 10 or 1+0 ?
> >
> > No, the question was not related to RAID0+1.
> > In my opinion, RAID0+1 would be evil!
> > RAID1+0 or md's RAID10 would be much better.
> >
> > Guy
> >
> >
> > > -----Original Message-----
> > > From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> > > owner@vger.kernel.org] On Behalf Of Andrew Rechenberg Lists
> > > Sent: Monday, April 25, 2005 4:05 PM
> > > To: linux-raid@vger.kernel.org
> > > Subject: RE: raid 10 or 1+0 ?
> > >
> > > The subject of the mail should be "raid 10 or 0+1" I believe :)
> > >
> > > According to acnc.com:
> > >
> > > http://www.acnc.com/04_01_10.html
> > >
> > > "RAID 10 is implemented as a striped array whose segments
> > are RAID 1
> > > arrays "
> > >
> > > http://www.acnc.com/04_01_0_1.html
> > >
> > > "RAID 0+1 is implemented as a mirrored array whose segments
> > are RAID 0
> > > arrays"
> > >
> > > If a drive were to fail in a RAID0+1, what you are left with is
> > > essentially one RAID0 array.
> > >
> > > You want to use RAID10 if you need high performance and very good
> > > fault tolerance.  The disadvantage is that you end up with
> > half of the
> > > available raw space as useable.
> > >
> > > I've never seen nor tried a "/" file system on RAID10 or RAID0+1.
> > > What I usually hear recommended is /boot and or / on RAID1
> > and then if
> > > you need better performance for a database or other
> > application, then
> > > create a /data partition or something of the sort on a
> > separate RAID10
> > > array that is on different disk spindles.
> > >
> > > Here is our configuration:
> > >
> > > /: RAID1
> > > /backup: RAID0 disk backup staging area
> > > /data: LVM on a 56 SCSI disk SW RAID10 array
> > >
> > >
> > > HTH,
> > > Andy.
> > >
> > >
> > > > -----Original Message-----
> > > > From: linux-raid-owner@vger.kernel.org
> > > > [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Andre Noll
> > > > Sent: Sunday, April 24, 2005 9:54 AM
> > > > To: linux-raid@vger.kernel.org
> > > > Subject: Re: raid 10 or 1+0 ?
> > > >
> > > > On Sat, 23 Apr 2005 12:21:17 -0400 you wrote in
> > > > local.lists.linux-raid:
> > > >
> > > > > Maybe we need some success stories for RAID10 and RAID1+0
> > > > mounted on "/".
> > > >
> > > > I have such a setup up and running for quite some time now:
> > > >
> > > > cat /proc/mdstat
> > > > Personalities : [raid0] [raid1]
> > > > md3 : active raid0 md1[0] md2[1]
> > > >       156247808 blocks 64k chunks
> > > >
> > > > md2 : active raid1 hda2[0] hdk2[1]
> > > >       78123968 blocks [2/2] [UU]
> > > >
> > > > md1 : active raid1 hdc2[0] hdg2[1]
> > > >       78123968 blocks [2/2] [UU]
> > > >
> > > > md0 : active raid1 hdc1[2] hda1[3] hdk1[1] hdg1[0]
> > > >       49280 blocks [4/4] [UUUU]
> > > >
> > > > My rootfs is on an LV. The corresponding VG is made from md3.
> > > >
> > > > This works if you do not rely on the kernel to assemble
> > your array
> > > > but use an initrd to achieve this.
> > > >
> > > > Just use something like this in your linuxrc, right after
> > creating
> > > > the device nodes (if you use udev):
> > > >
> > > > 	if test -e /proc/mdstat; then
> > > > 		log "scanning for multi disk devices"
> > > > 		echo "DEVICE /dev/hd[a-z] /dev/sd[a-z] /dev/md[0-9]"
>
> > > > /etc/mdadm.conf
> > > > 		mdadm --examine --scan --config=/etc/mdadm.conf \
> > > > 			>> /etc/mdadm.conf
> > > > 		mdadm --assemble --scan
> > > > 	fi
> > > >
> > > > 	if test -c /dev/mapper/control; then
> > > > 		log "setting up lvm"
> > > > 		vgscan --mknodes
> > > > 		vgchange -a y
> > > > 	fi
> > > >
> > > >
> > > > BTW, you should definitely use striped mirrors rather than
> > > > mirrored stripes.
> > > >
> > > > However, note that you can not boot from a striped mirror.
> > > > That is, you need a tiny partition, preferably at the
> > beginning of
> > > > your discs, to store the kernel image and the initrd, but not the
> > > > rootfs. You can make it a raid1 over all disks, like my
> > md0 above,
> > > > and use lilo to write a mbr to _all_ discs. That way you
> > can shuffle
> > > > around your discs and your system will still boot.
> > > >
> > > > More details on request ;)
> > > > Andre
> > > > --
> > > > Andre Noll, http://www.mathematik.tu-darmstadt.de/~noll
> > > >
> > > Confidentiality Notice: This e-mail message including
> > attachments, if
> > > any, is intended only for the person or entity to which it is
> > > addressed and may contain confidential and/or privileged
> > material. Any
> > > unauthorized review, use, disclosure or distribution is
> > prohibited. If
> > > you are not the intended recipient, please contact the
> > sender by reply
> > > e-mail and destroy all copies of the original message. If
> > you are the
> > > intended recipient, but do not wish to receive
> > communications through
> > > this medium, please so advise the sender immediately.
> > >
> > >
> >
> >
> 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
@ 2005-04-26 17:24 Andrew Rechenberg Lists
  2005-04-26 18:53 ` Guy
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Rechenberg Lists @ 2005-04-26 17:24 UTC (permalink / raw)
  To: Guy, linux-raid

OK, obviously I'm missing something.  :)

Can someone explain what they think the difference is between
RAID0+1, RAID1+0, and RAID10, with document references please?  The
latter two seem to me to be the same thing, and everything I can find
referencing 1+0 describes what I know as RAID10 - a striped array
consisting of RAID1 arrays.



> -----Original Message-----
> From: Guy [mailto:bugzilla@watkins-home.com] 
> Sent: Monday, April 25, 2005 5:49 PM
> To: Andrew Rechenberg Lists; linux-raid@vger.kernel.org
> Subject: RE: raid 10 or 1+0 ?
> 
> No, the question was not related to RAID0+1.
> In my opinion, RAID0+1 would be evil!
> RAID1+0 or md's RAID10 would be much better.
> 
> Guy
> 
> 
> > -----Original Message-----
> > From: linux-raid-owner@vger.kernel.org [mailto:linux-raid- 
> > owner@vger.kernel.org] On Behalf Of Andrew Rechenberg Lists
> > Sent: Monday, April 25, 2005 4:05 PM
> > To: linux-raid@vger.kernel.org
> > Subject: RE: raid 10 or 1+0 ?
> > 
> > The subject of the mail should be "raid 10 or 0+1" I believe :)
> > 
> > According to acnc.com:
> > 
> > http://www.acnc.com/04_01_10.html
> > 
> > "RAID 10 is implemented as a striped array whose segments 
> are RAID 1 
> > arrays "
> > 
> > http://www.acnc.com/04_01_0_1.html
> > 
> > "RAID 0+1 is implemented as a mirrored array whose segments 
> are RAID 0 
> > arrays"
> > 
> > If a drive were to fail in a RAID0+1, what you are left with is 
> > essentially one RAID0 array.
> > 
> > You want to use RAID10 if you need high performance and very good 
> > fault tolerance.  The disadvantage is that you end up with 
> half of the 
> > available raw space as useable.
> > 
> > I've never seen nor tried a "/" file system on RAID10 or RAID0+1.  
> > What I usually hear recommended is /boot and or / on RAID1 
> and then if 
> > you need better performance for a database or other 
> application, then 
> > create a /data partition or something of the sort on a 
> separate RAID10 
> > array that is on different disk spindles.
> > 
> > Here is our configuration:
> > 
> > /: RAID1
> > /backup: RAID0 disk backup staging area
> > /data: LVM on a 56 SCSI disk SW RAID10 array
> > 
> > 
> > HTH,
> > Andy.
> > 
> > 
> > > -----Original Message-----
> > > From: linux-raid-owner@vger.kernel.org 
> > > [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Andre Noll
> > > Sent: Sunday, April 24, 2005 9:54 AM
> > > To: linux-raid@vger.kernel.org
> > > Subject: Re: raid 10 or 1+0 ?
> > >
> > > On Sat, 23 Apr 2005 12:21:17 -0400 you wrote in
> > > local.lists.linux-raid:
> > >
> > > > Maybe we need some success stories for RAID10 and RAID1+0
> > > mounted on "/".
> > >
> > > I have such a setup up and running for quite some time now:
> > >
> > > cat /proc/mdstat
> > > Personalities : [raid0] [raid1]
> > > md3 : active raid0 md1[0] md2[1]
> > >       156247808 blocks 64k chunks
> > >
> > > md2 : active raid1 hda2[0] hdk2[1]
> > >       78123968 blocks [2/2] [UU]
> > >
> > > md1 : active raid1 hdc2[0] hdg2[1]
> > >       78123968 blocks [2/2] [UU]
> > >
> > > md0 : active raid1 hdc1[2] hda1[3] hdk1[1] hdg1[0]
> > >       49280 blocks [4/4] [UUUU]
> > >
> > > My rootfs is on an LV. The corresponding VG is made from md3.
> > >
> > > This works if you do not rely on the kernel to assemble 
> your array 
> > > but use an initrd to achieve this.
> > >
> > > Just use something like this in your linuxrc, right after 
> creating 
> > > the device nodes (if you use udev):
> > >
> > > 	if test -e /proc/mdstat; then
> > > 		log "scanning for multi disk devices"
> > > 		echo "DEVICE /dev/hd[a-z] /dev/sd[a-z] /dev/md[0-9]" > 
> > > /etc/mdadm.conf
> > > 		mdadm --examine --scan --config=/etc/mdadm.conf \
> > > 			>> /etc/mdadm.conf
> > > 		mdadm --assemble --scan
> > > 	fi
> > >
> > > 	if test -c /dev/mapper/control; then
> > > 		log "setting up lvm"
> > > 		vgscan --mknodes
> > > 		vgchange -a y
> > > 	fi
> > >
> > >
> > > BTW, you should definitely use striped mirrors rather than 
> > > mirrored stripes.
> > >
> > > However, note that you can not boot from a striped mirror.
> > > That is, you need a tiny partition, preferably at the 
> beginning of 
> > > your discs, to store the kernel image and the initrd, but not the 
> > > rootfs. You can make it a raid1 over all disks, like my 
> md0 above, 
> > > and use lilo to write a mbr to _all_ discs. That way you 
> can shuffle 
> > > around your discs and your system will still boot.
> > >
> > > More details on request ;)
> > > Andre
> > > --
> > > Andre Noll, http://www.mathematik.tu-darmstadt.de/~noll
> > >
> > 
> > 
> 
> 



^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
  2005-04-25 20:05 Andrew Rechenberg Lists
  2005-04-25 20:26 ` Gil
@ 2005-04-25 21:48 ` Guy
  1 sibling, 0 replies; 12+ messages in thread
From: Guy @ 2005-04-25 21:48 UTC (permalink / raw)
  To: 'Andrew Rechenberg Lists', linux-raid

No, the question was not related to RAID0+1.
In my opinion, RAID0+1 would be evil!
RAID1+0 or md's RAID10 would be much better.

Guy


> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Andrew Rechenberg Lists
> Sent: Monday, April 25, 2005 4:05 PM
> To: linux-raid@vger.kernel.org
> Subject: RE: raid 10 or 1+0 ?
> 
> The subject of the mail should be "raid 10 or 0+1" I believe :)
> 
> According to acnc.com:
> 
> http://www.acnc.com/04_01_10.html
> 
> "RAID 10 is implemented as a striped array whose segments are RAID 1
> arrays "
> 
> http://www.acnc.com/04_01_0_1.html
> 
> "RAID 0+1 is implemented as a mirrored array whose segments are RAID 0
> arrays"
> 
> If a drive were to fail in a RAID0+1, what you are left with is
> essentially one RAID0 array.
> 
> You want to use RAID10 if you need high performance and very good fault
> tolerance.  The disadvantage is that you end up with half of the
> available raw space as useable.
> 
> I've never seen nor tried a "/" file system on RAID10 or RAID0+1.  What
> I usually hear recommended is /boot and/or / on RAID1 and then if you
> need better performance for a database or other application, then create
> a /data partition or something of the sort on a separate RAID10 array
> that is on different disk spindles.
> 
> Here is our configuration:
> 
> /: RAID1
> /backup: RAID0 disk backup staging area
> /data: LVM on a 56 SCSI disk SW RAID10 array
> 
> 
> HTH,
> Andy.
> 
> 
> > -----Original Message-----
> > From: linux-raid-owner@vger.kernel.org
> > [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Andre Noll
> > Sent: Sunday, April 24, 2005 9:54 AM
> > To: linux-raid@vger.kernel.org
> > Subject: Re: raid 10 or 1+0 ?
> >
> > On Sat, 23 Apr 2005 12:21:17 -0400 you wrote in
> > local.lists.linux-raid:
> >
> > > Maybe we need some success stories for RAID10 and RAID1+0
> > mounted on "/".
> >
> > I have such a setup up and running for quite some time now:
> >
> > cat /proc/mdstat
> > Personalities : [raid0] [raid1]
> > md3 : active raid0 md1[0] md2[1]
> >       156247808 blocks 64k chunks
> >
> > md2 : active raid1 hda2[0] hdk2[1]
> >       78123968 blocks [2/2] [UU]
> >
> > md1 : active raid1 hdc2[0] hdg2[1]
> >       78123968 blocks [2/2] [UU]
> >
> > md0 : active raid1 hdc1[2] hda1[3] hdk1[1] hdg1[0]
> >       49280 blocks [4/4] [UUUU]
> >
> > My rootfs is on an LV. The corresponding VG is made from md3.
> >
> > This works if you do not rely on the kernel to assemble your
> > array but use an initrd to achieve this.
> >
> > Just use something like this in your linuxrc, right after
> > creating the device nodes (if you use udev):
> >
> > 	if test -e /proc/mdstat; then
> > 		log "scanning for multi disk devices"
> > 		echo "DEVICE /dev/hd[a-z] /dev/sd[a-z]
> > /dev/md[0-9]" > /etc/mdadm.conf
> > 		mdadm --examine --scan --config=/etc/mdadm.conf \
> > 			>> /etc/mdadm.conf
> > 		mdadm --assemble --scan
> > 	fi
> >
> > 	if test -c /dev/mapper/control; then
> > 		log "setting up lvm"
> > 		vgscan --mknodes
> > 		vgchange -a y
> > 	fi
> >
> >
> > BTW, you should definitely use striped mirrors rather than
> > mirrored stripes.
> >
> > However, note that you can not boot from a striped mirror.
> > That is, you need a tiny partition, preferably at the
> > beginning of your discs, to store the kernel image and the
> > initrd, but not the rootfs. You can make it a raid1 over all
> > disks, like my md0 above, and use lilo to write a mbr to
> > _all_ discs. That way you can shuffle around your discs and
> > your system will still boot.
> >
> > More details on request ;)
> > Andre
> > --
> > Andre Noll, http://www.mathematik.tu-darmstadt.de/~noll
> >
> 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: raid 10 or 1+0 ?
  2005-04-25 20:05 Andrew Rechenberg Lists
@ 2005-04-25 20:26 ` Gil
  2005-04-25 21:48 ` Guy
  1 sibling, 0 replies; 12+ messages in thread
From: Gil @ 2005-04-25 20:26 UTC (permalink / raw)
  To: Andrew Rechenberg Lists; +Cc: linux-raid

Andrew Rechenberg Lists wrote:
> I've never seen nor tried a "/" file system on RAID10 or RAID0+1. 
> What I usually hear recommended is /boot and/or / on RAID1 and then 
> if you need better performance for a database or other application, 
> then create a /data partition or something of the sort on a separate 
> RAID10 array that is on different disk spindles.

Having a RAID1 root partition is fantastic because even if you totally
flub your RAID setup elsewhere you can still boot the system without
being RAID aware.  This is a huge advantage over either RAID10 or
RAID0+1 in my mind.

--Gil

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: raid 10 or 1+0 ?
@ 2005-04-25 20:05 Andrew Rechenberg Lists
  2005-04-25 20:26 ` Gil
  2005-04-25 21:48 ` Guy
  0 siblings, 2 replies; 12+ messages in thread
From: Andrew Rechenberg Lists @ 2005-04-25 20:05 UTC (permalink / raw)
  To: linux-raid

The subject of the mail should be "raid 10 or 0+1" I believe :)

According to acnc.com:

http://www.acnc.com/04_01_10.html

"RAID 10 is implemented as a striped array whose segments are RAID 1
arrays "

http://www.acnc.com/04_01_0_1.html

"RAID 0+1 is implemented as a mirrored array whose segments are RAID 0
arrays"

If a drive were to fail in a RAID0+1, what you are left with is
essentially one RAID0 array.
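
To make that concrete, here is an illustrative sketch (Python, 4 disks, mirror/stripe pairs (0,1) and (2,3) chosen as assumptions) counting which two-disk failures each arrangement survives:

```python
from itertools import combinations

MIRROR_PAIRS = [{0, 1}, {2, 3}]   # RAID1+0: RAID0 striped over these mirrors
STRIPE_SETS  = [{0, 1}, {2, 3}]   # RAID0+1: mirror of these two stripes

def raid1plus0_survives(failed):
    # Dies only if some mirror loses both of its disks.
    return all(not pair <= failed for pair in MIRROR_PAIRS)

def raid0plus1_survives(failed):
    # One fully intact stripe (no failed member) keeps the array alive.
    return any(not (stripe & failed) for stripe in STRIPE_SETS)

pairs = [set(p) for p in combinations(range(4), 2)]
print(sum(raid1plus0_survives(p) for p in pairs), "of", len(pairs))  # 4 of 6
print(sum(raid0plus1_survives(p) for p in pairs), "of", len(pairs))  # 2 of 6
```

With the same four disks, RAID1+0 survives 4 of the 6 possible two-disk failures, RAID0+1 only 2: once one disk dies in RAID0+1, its whole stripe is gone and you are indeed left with a single unprotected RAID0.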

You want to use RAID10 if you need high performance and very good fault
tolerance.  The disadvantage is that only half of the available raw
space ends up usable.

I've never seen nor tried a "/" file system on RAID10 or RAID0+1.  What
I usually hear recommended is /boot and/or / on RAID1, and then if you
need better performance for a database or other application, create
a /data partition or something of the sort on a separate RAID10 array
on different disk spindles.

Here is our configuration:

/: RAID1
/backup: RAID0 disk backup staging area
/data: LVM on a 56 SCSI disk SW RAID10 array


HTH,
Andy.


> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org 
> [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Andre Noll
> Sent: Sunday, April 24, 2005 9:54 AM
> To: linux-raid@vger.kernel.org
> Subject: Re: raid 10 or 1+0 ?
> 
> On Sat, 23 Apr 2005 12:21:17 -0400 you wrote in 
> local.lists.linux-raid:
> 
> > Maybe we need some success stories for RAID10 and RAID1+0 
> mounted on "/".
> 
> I have such a setup up and running for quite some time now: 
> 
> cat /proc/mdstat
> Personalities : [raid0] [raid1]
> md3 : active raid0 md1[0] md2[1]
>       156247808 blocks 64k chunks
>       
> md2 : active raid1 hda2[0] hdk2[1]
>       78123968 blocks [2/2] [UU]
>       
> md1 : active raid1 hdc2[0] hdg2[1]
>       78123968 blocks [2/2] [UU]
>       
> md0 : active raid1 hdc1[2] hda1[3] hdk1[1] hdg1[0]
>       49280 blocks [4/4] [UUUU]
> 
> My rootfs is on an LV. The corresponding VG is made from md3. 
> 
> This works if you do not rely on the kernel to assemble your 
> array but use an initrd to achieve this.
> 
> Just use something like this in your linuxrc, right after 
> creating the device nodes (if you use udev):
> 
> 	if test -e /proc/mdstat; then
> 		log "scanning for multi disk devices"
> 		echo "DEVICE /dev/hd[a-z] /dev/sd[a-z] 
> /dev/md[0-9]" > /etc/mdadm.conf
> 		mdadm --examine --scan --config=/etc/mdadm.conf \
> 			>> /etc/mdadm.conf
> 		mdadm --assemble --scan
> 	fi
> 
> 	if test -c /dev/mapper/control; then
> 		log "setting up lvm"
> 		vgscan --mknodes
> 		vgchange -a y
> 	fi
> 
> 
> BTW, you should definitely use striped mirrors rather than 
> mirrored stripes.
> 
> However, note that you can not boot from a striped mirror. 
> That is, you need a tiny partition, preferably at the 
> beginning of your discs, to store the kernel image and the 
> initrd, but not the rootfs. You can make it a raid1 over all 
> disks, like my md0 above, and use lilo to write a mbr to 
> _all_ discs. That way you can shuffle around your discs and 
> your system will still boot.
> 
> More details on request ;)
> Andre
> --
> Andre Noll, http://www.mathematik.tu-darmstadt.de/~noll
> 
> 



^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2005-04-26 18:53 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-04-22 13:55 raid 10 or 1+0 ? yves DEGLAIN
2005-04-23 11:26 ` Tobias DiPasquale
2005-04-23 16:21   ` Guy
2005-04-24 13:54     ` Andre Noll
2005-04-25 20:01     ` Molle Bestefich
2005-04-25 20:08       ` Molle Bestefich
2005-04-25 21:49         ` Guy
2005-04-25 20:05 Andrew Rechenberg Lists
2005-04-25 20:26 ` Gil
2005-04-25 21:48 ` Guy
2005-04-26 17:24 Andrew Rechenberg Lists
2005-04-26 18:53 ` Guy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.