From: "Michael" <michael@insulin-pumpers.org>
To: linux-raid@vger.kernel.org
Subject: RE: Please help with a special RAID 1 setup
Date: Wed, 7 Jan 2004 10:59:02 -0800
Message-ID: <200401071858.i07Iwx228259@bzs.org>
In-Reply-To: <8075D5C3061B9441944E137377645118012F02@cinshrexc03.shermfin.com>

<snip> 
> I don't believe that the above is a correct statement.  Linux
> software RAID1 will allow mirroring across more than 2 devices:
>

This is correct. Below are the /proc/mdstat output and the /etc/raidtab configuration from our RAID1 setup.

The boot partition (md1) is a three-disk RAID1; swap and root are two-disk RAID1 arrays, each with a hot spare.

Personalities : [raid0] [raid1] [raid5] 
read_ahead 1024 sectors
md1 : active raid1 sdc1[2] sdb1[1] sda1[0]
      15936 blocks [3/3] [UUU]

md2 : active raid1 sdc2[2] sdb2[1] sda2[0]
      489856 blocks [2/2] [UU]

md0 : active raid1 sdc3[2] sdb3[1] sda3[0]
      16924352 blocks [2/2] [UU]
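
To verify that the spare really takes over, a member can be failed by
hand with the raidtools utilities. A minimal sketch, using the device
names above; raidsetfaulty, raidhotremove, and raidhotadd are the
raidtools commands as I recall them, so check the man pages before
trying this on live data:

    # mark one active member of md0 as failed; the spare (sdc3)
    # should immediately start rebuilding
    raidsetfaulty /dev/md0 /dev/sdb3
    cat /proc/mdstat

    # once the rebuild finishes, pull the failed member and
    # re-add it so it becomes the new spare
    raidhotremove /dev/md0 /dev/sdb3
    raidhotadd /dev/md0 /dev/sdb3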


# raid-1 configuration
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           3
        chunk-size              4
# Spare disks for hot reconstruction
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        chunk-size              32
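# Spare disks for hot reconstruction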
        nr-spare-disks          1
        persistent-superblock   1
        device                  /dev/sda3
        raid-disk               0
        device                  /dev/sdb3
        raid-disk               1
        device                  /dev/sdc3
        spare-disk              0

raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        chunk-size              4
# Spare disks for hot reconstruction
        nr-spare-disks          1
        persistent-superblock   1
        device                  /dev/sda2
        raid-disk               0
        device                  /dev/sdb2
        raid-disk               1
        device                  /dev/sdc2
        spare-disk              0

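For completeness, arrays like these are initialized from /etc/raidtab
with mkraid from the raidtools package. A rough sketch (mkraid wipes
the member partitions, so double-check the device names first), with
approximate mdadm equivalents for anyone on the newer tools; treat
the exact flags as assumptions to verify against mdadm(8):

    # raidtools: build each array from /etc/raidtab (destructive)
    mkraid /dev/md1
    mkraid /dev/md0
    mkraid /dev/md2

    # approximate mdadm equivalents (mdadm needs no raidtab)
    mdadm --create /dev/md1 --level=1 --raid-devices=3 \
          /dev/sda1 /dev/sdb1 /dev/sdc1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
          /dev/sda3 /dev/sdb3 /dev/sdc3
    mdadm --create /dev/md2 --level=1 --raid-devices=2 --spare-devices=1 \
          /dev/sda2 /dev/sdb2 /dev/sdc2
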
Michael@Insulin-Pumpers.org
