* Interpreting mdstat output
@ 2013-02-05 12:20 Brian Candler
  2013-02-05 12:30 ` Roy Sigurd Karlsbakk
  2013-02-05 13:02 ` Robin Hill
  0 siblings, 2 replies; 10+ messages in thread
From: Brian Candler @ 2013-02-05 12:20 UTC (permalink / raw)
  To: linux-raid

(Ubuntu 12.04.2, kernel 3.2.0-37-generic)

I created a RAID5 array with 22 data disks and 2 hot spares, like this:

    # mdadm --create /dev/md/dbs -l raid5 -n 22 -x 2 -c 512 -b internal /dev/sd{b..y}

However, I'm having difficulty understanding the mdstat output.

    # cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid5 sdw[24] sdy[23](S) sdx[22](S) sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
          61532835840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [22/21] [UUUUUUUUUUUUUUUUUUUUU_]
          [=>...................]  recovery =  6.0% (176470508/2930135040) finish=706.9min speed=64915K/sec
          bitmap: 0/22 pages [0KB], 65536KB chunk

    unused devices: <none>
    # 

Problems:

1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
And if so which one?

Output from "dmesg | grep -3 sd" is at end of this mail, and it doesn't show
any errors.

All the disks have the same event counter in the metadata:

# for i in /dev/sd{b..y}; do mdadm --examine $i | grep Events; done | sort | uniq -c
     24          Events : 594
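
The device table at the end of "mdadm --detail" also maps each raid slot to a
physical disk and a state, which would presumably single out a genuinely
failed member (assuming the array assembled as /dev/md127):

    # mdadm --detail /dev/md127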

2. /proc/mdstat shows the member disks numbered 0..20 and 22..24, what
happened to 21 ?

I had previously initialised this array as a RAID6 and then changed my mind
(mdadm --stop on the RAID6, then mdadm --create as shown above). Could that
cause this?

Thanks,

Brian.


[ 1521.456236] md: bind<sdb>
[ 1521.562109] md: bind<sdc>
[ 1521.642205] md: bind<sdd>
[ 1521.742085] md: bind<sde>
[ 1521.865477] md: bind<sdf>
[ 1521.913948] md: bind<sdg>
[ 1522.032447] md: bind<sdh>
[ 1522.149983] md: bind<sdi>
[ 1522.210146] md: bind<sdj>
[ 1522.311450] md: bind<sdk>
[ 1522.394621] md: bind<sdl>
[ 1522.492218] md: bind<sdm>
[ 1522.609674] md: bind<sdn>
[ 1522.715425] md: bind<sdo>
[ 1522.793035] md: bind<sdp>
[ 1522.872313] md: bind<sdq>
[ 1522.984171] md: bind<sdr>
[ 1523.059627] md: bind<sds>
[ 1523.146203] md: bind<sdt>
[ 1523.257396] md: bind<sdu>
[ 1523.367173] md: bind<sdv>
[ 1523.459063] md: bind<sdx>
[ 1523.503167] md: bind<sdy>
[ 1523.602943] md: bind<sdw>
[ 1523.604343] md/raid:md127: device sdv operational as raid disk 20
[ 1523.604347] md/raid:md127: device sdu operational as raid disk 19
[ 1523.604350] md/raid:md127: device sdt operational as raid disk 18
[ 1523.604352] md/raid:md127: device sds operational as raid disk 17
[ 1523.604354] md/raid:md127: device sdr operational as raid disk 16
[ 1523.604356] md/raid:md127: device sdq operational as raid disk 15
[ 1523.604358] md/raid:md127: device sdp operational as raid disk 14
[ 1523.604361] md/raid:md127: device sdo operational as raid disk 13
[ 1523.604363] md/raid:md127: device sdn operational as raid disk 12
[ 1523.604365] md/raid:md127: device sdm operational as raid disk 11
[ 1523.604367] md/raid:md127: device sdl operational as raid disk 10
[ 1523.604369] md/raid:md127: device sdk operational as raid disk 9
[ 1523.604371] md/raid:md127: device sdj operational as raid disk 8
[ 1523.604373] md/raid:md127: device sdi operational as raid disk 7
[ 1523.604376] md/raid:md127: device sdh operational as raid disk 6
[ 1523.604378] md/raid:md127: device sdg operational as raid disk 5
[ 1523.604380] md/raid:md127: device sdf operational as raid disk 4
[ 1523.604382] md/raid:md127: device sde operational as raid disk 3
[ 1523.604384] md/raid:md127: device sdd operational as raid disk 2
[ 1523.604386] md/raid:md127: device sdc operational as raid disk 1
[ 1523.604388] md/raid:md127: device sdb operational as raid disk 0
[ 1523.605694] md/raid:md127: allocated 23216kB
[ 1523.605802] md/raid:md127: raid level 5 active with 21 out of 22 devices, algorithm 2
[ 1523.606250] RAID conf printout:
[ 1523.606252]  --- level:5 rd:22 wd:21
[ 1523.606254]  disk 0, o:1, dev:sdb
[ 1523.606256]  disk 1, o:1, dev:sdc
[ 1523.606258]  disk 2, o:1, dev:sdd
[ 1523.606260]  disk 3, o:1, dev:sde
[ 1523.606261]  disk 4, o:1, dev:sdf
[ 1523.606263]  disk 5, o:1, dev:sdg
[ 1523.606265]  disk 6, o:1, dev:sdh
[ 1523.606266]  disk 7, o:1, dev:sdi
[ 1523.606268]  disk 8, o:1, dev:sdj
[ 1523.606270]  disk 9, o:1, dev:sdk
[ 1523.606271]  disk 10, o:1, dev:sdl
[ 1523.606273]  disk 11, o:1, dev:sdm
[ 1523.606275]  disk 12, o:1, dev:sdn
[ 1523.606277]  disk 13, o:1, dev:sdo
[ 1523.606278]  disk 14, o:1, dev:sdp
[ 1523.606280]  disk 15, o:1, dev:sdq
[ 1523.606282]  disk 16, o:1, dev:sdr
[ 1523.606283]  disk 17, o:1, dev:sds
[ 1523.606285]  disk 18, o:1, dev:sdt
[ 1523.606287]  disk 19, o:1, dev:sdu
[ 1523.606289]  disk 20, o:1, dev:sdv
[ 1523.606440] created bitmap (22 pages) for device md127
[ 1523.609710] md127: bitmap initialized from disk: read 2/2 pages, set 44711 of 44711 bits
[ 1523.819455] md127: detected capacity change from 0 to 63009623900160
[ 1523.819488] RAID conf printout:
[ 1523.819492]  --- level:5 rd:22 wd:21
[ 1523.819495]  disk 0, o:1, dev:sdb
[ 1523.819497]  disk 1, o:1, dev:sdc
[ 1523.819499]  disk 2, o:1, dev:sdd
[ 1523.819501]  disk 3, o:1, dev:sde
[ 1523.819502]  disk 4, o:1, dev:sdf
[ 1523.819504]  disk 5, o:1, dev:sdg
[ 1523.819506]  disk 6, o:1, dev:sdh
[ 1523.819508]  disk 7, o:1, dev:sdi
[ 1523.819509]  disk 8, o:1, dev:sdj
[ 1523.819511]  disk 9, o:1, dev:sdk
[ 1523.819513]  disk 10, o:1, dev:sdl
[ 1523.819515]  disk 11, o:1, dev:sdm
[ 1523.819517]  disk 12, o:1, dev:sdn
[ 1523.819518]  disk 13, o:1, dev:sdo
[ 1523.819520]  disk 14, o:1, dev:sdp
[ 1523.819522]  disk 15, o:1, dev:sdq
[ 1523.819524]  disk 16, o:1, dev:sdr
[ 1523.819525]  disk 17, o:1, dev:sds
[ 1523.819527]  disk 18, o:1, dev:sdt
[ 1523.819529]  disk 19, o:1, dev:sdu
[ 1523.819531]  disk 20, o:1, dev:sdv
[ 1523.819532]  disk 21, o:1, dev:sdw
[ 1523.819539] RAID conf printout:
[ 1523.819541]  --- level:5 rd:22 wd:21
[ 1523.819543]  disk 0, o:1, dev:sdb
[ 1523.819544]  disk 1, o:1, dev:sdc
[ 1523.819546]  disk 2, o:1, dev:sdd
[ 1523.819548]  disk 3, o:1, dev:sde
[ 1523.819549]  disk 4, o:1, dev:sdf
[ 1523.819551]  disk 5, o:1, dev:sdg
[ 1523.819553]  disk 6, o:1, dev:sdh
[ 1523.819554]  disk 7, o:1, dev:sdi
[ 1523.819556]  disk 8, o:1, dev:sdj
[ 1523.819558]  disk 9, o:1, dev:sdk
[ 1523.819559]  disk 10, o:1, dev:sdl
[ 1523.819561]  disk 11, o:1, dev:sdm
[ 1523.819563]  disk 12, o:1, dev:sdn
[ 1523.819564]  disk 13, o:1, dev:sdo
[ 1523.819566]  disk 14, o:1, dev:sdp
[ 1523.819568]  disk 15, o:1, dev:sdq
[ 1523.819570]  disk 16, o:1, dev:sdr
[ 1523.819571]  disk 17, o:1, dev:sds
[ 1523.819573]  disk 18, o:1, dev:sdt
[ 1523.819575]  disk 19, o:1, dev:sdu
[ 1523.819576]  disk 20, o:1, dev:sdv
[ 1523.819578]  disk 21, o:1, dev:sdw
[ 1523.819579] RAID conf printout:
[ 1523.819581]  --- level:5 rd:22 wd:21
[ 1523.819583]  disk 0, o:1, dev:sdb
[ 1523.819584]  disk 1, o:1, dev:sdc
[ 1523.819586]  disk 2, o:1, dev:sdd
[ 1523.819588]  disk 3, o:1, dev:sde
[ 1523.819589]  disk 4, o:1, dev:sdf
[ 1523.819591]  disk 5, o:1, dev:sdg
[ 1523.819592]  disk 6, o:1, dev:sdh
[ 1523.819594]  disk 7, o:1, dev:sdi
[ 1523.819596]  disk 8, o:1, dev:sdj
[ 1523.819597]  disk 9, o:1, dev:sdk
[ 1523.819599]  disk 10, o:1, dev:sdl
[ 1523.819601]  disk 11, o:1, dev:sdm
[ 1523.819602]  disk 12, o:1, dev:sdn
[ 1523.819604]  disk 13, o:1, dev:sdo
[ 1523.819606]  disk 14, o:1, dev:sdp
[ 1523.819607]  disk 15, o:1, dev:sdq
[ 1523.819609]  disk 16, o:1, dev:sdr
[ 1523.819611]  disk 17, o:1, dev:sds
[ 1523.819612]  disk 18, o:1, dev:sdt
[ 1523.819614]  disk 19, o:1, dev:sdu
[ 1523.819616]  disk 20, o:1, dev:sdv
[ 1523.819617]  disk 21, o:1, dev:sdw
[ 1523.819761] md: recovery of RAID array md127
[ 1523.819765] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 1523.819767] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.


* Re: Interpreting mdstat output
  2013-02-05 12:20 Interpreting mdstat output Brian Candler
@ 2013-02-05 12:30 ` Roy Sigurd Karlsbakk
  2013-02-05 12:34   ` Brian Candler
  2013-02-05 22:43   ` Hans-Peter Jansen
  2013-02-05 13:02 ` Robin Hill
  1 sibling, 2 replies; 10+ messages in thread
From: Roy Sigurd Karlsbakk @ 2013-02-05 12:30 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

> (Ubuntu 12.04.2, kernel 3.2.0-37-generic)
> 
> I created a RAID5 array with 22 data disks and 2 hot spares, like
> this:
> 
> # mdadm --create /dev/md/dbs -l raid5 -n 22 -x 2 -c 512 -b internal
> /dev/sd{b..y}

I beleive using 22 drives in a single RAID-5 is something like BASE jumping with a large umbrella. You should at least use RAID-6, and perhaps even split up the RAID into smaller ones.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for every pedagogue to avoid excessive use of idioms of xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.


* Re: Interpreting mdstat output
  2013-02-05 12:30 ` Roy Sigurd Karlsbakk
@ 2013-02-05 12:34   ` Brian Candler
  2013-02-05 12:41     ` Roy Sigurd Karlsbakk
  2013-02-05 22:43   ` Hans-Peter Jansen
  1 sibling, 1 reply; 10+ messages in thread
From: Brian Candler @ 2013-02-05 12:34 UTC (permalink / raw)
  To: Roy Sigurd Karlsbakk; +Cc: linux-raid

On Tue, Feb 05, 2013 at 01:30:41PM +0100, Roy Sigurd Karlsbakk wrote:
> I believe using 22 drives in a single RAID-5 is something like BASE jumping with a large umbrella. You should at least use RAID-6, and perhaps even split up the RAID into smaller ones.

Point taken, but this is actually a large dataset which could be recreated
if necessary (i.e. it's derived from other data), and is also to be copied
to another server.


* Re: Interpreting mdstat output
  2013-02-05 12:34   ` Brian Candler
@ 2013-02-05 12:41     ` Roy Sigurd Karlsbakk
  0 siblings, 0 replies; 10+ messages in thread
From: Roy Sigurd Karlsbakk @ 2013-02-05 12:41 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

----- Original message -----
> On Tue, Feb 05, 2013 at 01:30:41PM +0100, Roy Sigurd Karlsbakk wrote:
> > I believe using 22 drives in a single RAID-5 is something like BASE
> > jumping with a large umbrella. You should at least use RAID-6, and
> > perhaps even split up the RAID into smaller ones.
> 
> Point taken, but this is actually a large dataset which could be
> recreated
> if necessary (i.e. it's derived from other data), and is also to be
> copied to another server.

Still, better to use raid-6 + 1 spare than raid-5 + 2 spares. It will probably take a while to rebuild the data on that one in case of a double disk failure, and with that many drives, the chances of that are pretty decent.
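
For concreteness, the raid-6 + 1 spare layout on the same drives would be
created with something like this (a sketch, just carrying over the chunk size
and bitmap settings from the original command):

    # mdadm --create /dev/md/dbs -l raid6 -n 23 -x 1 -c 512 -b internal /dev/sd{b..y}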

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for every pedagogue to avoid excessive use of idioms of xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.


* Re: Interpreting mdstat output
  2013-02-05 12:20 Interpreting mdstat output Brian Candler
  2013-02-05 12:30 ` Roy Sigurd Karlsbakk
@ 2013-02-05 13:02 ` Robin Hill
  2013-02-05 13:40   ` Brian Candler
  1 sibling, 1 reply; 10+ messages in thread
From: Robin Hill @ 2013-02-05 13:02 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid


On Tue Feb 05, 2013 at 12:20:48PM +0000, Brian Candler wrote:

> (Ubuntu 12.04.2, kernel 3.2.0-37-generic)
> 
> I created a RAID5 array with 22 data disks and 2 hot spares, like this:
> 
>     # mdadm --create /dev/md/dbs -l raid5 -n 22 -x 2 -c 512 -b internal /dev/sd{b..y}
> 
> However, I'm having difficulty understanding the mdstat output.
> 
>     # cat /proc/mdstat
>     Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
>     md127 : active raid5 sdw[24] sdy[23](S) sdx[22](S) sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
>           61532835840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [22/21] [UUUUUUUUUUUUUUUUUUUUU_]
>           [=>...................]  recovery =  6.0% (176470508/2930135040) finish=706.9min speed=64915K/sec
>           bitmap: 0/22 pages [0KB], 65536KB chunk
> 
>     unused devices: <none>
>     # 
> 
> Problems:
> 
> 1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
> And if so which one?
> 
No, that's normal. A RAID5 (or RAID6) array is created in a degraded
form, then the last disk(s) are recovered (it's the quickest way of
getting the array ready for use).
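
If you want to double-check that this is just the initial rebuild and not a
failed member, something like the following should show a recovering (rather
than faulty) state (a quick sketch, assuming the array came up as /dev/md127):

    # mdadm --detail /dev/md127 | grep -E 'State :|Rebuild Status'

The device table at the end of the full --detail output also lists each
member with its raid slot and state.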

> Output from "dmesg | grep -3 sd" is at end of this mail, and it doesn't show
> any errors.
> 
> All the disks have the same event counter in the metadata:
> 
> # for i in /dev/sd{b..y}; do mdadm --examine $i | grep Events; done | sort | uniq -c
>      24          Events : 594
> 
> 2. /proc/mdstat shows the member disks numbered 0..20 and 22..24, what
> happened to 21 ?
> 
21 would (I think) be the "missing" one from the original array creation
(with 22..24 as the spares). The numbers themselves don't really signify
anything.
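
If you want to see how those device numbers map onto actual raid slots, you
can ask each member for its role (a sketch, assuming v1.x metadata and the
same /dev/sd{b..y} devices as before):

    # for i in /dev/sd{b..y}; do printf '%s: ' $i; mdadm --examine $i | grep 'Device Role'; done

The two hot spares report "spare" here, while the active members report the
slot they occupy.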

HTH,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* Re: Interpreting mdstat output
  2013-02-05 13:02 ` Robin Hill
@ 2013-02-05 13:40   ` Brian Candler
  2013-02-05 13:49     ` Brian Candler
  0 siblings, 1 reply; 10+ messages in thread
From: Brian Candler @ 2013-02-05 13:40 UTC (permalink / raw)
  To: linux-raid

On Tue, Feb 05, 2013 at 01:02:37PM +0000, Robin Hill wrote:
> > 1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
> > And if so which one?
> > 
> No, that's normal. A RAID5 (or RAID6) array is created in a degraded
> form, then the last disk(s) are recovered (it's the quickest way of
> getting the array ready for use).

Ah I see. Thank you.


* Re: Interpreting mdstat output
  2013-02-05 13:40   ` Brian Candler
@ 2013-02-05 13:49     ` Brian Candler
  2013-02-05 13:54       ` Phil Turmel
  2013-02-05 13:59       ` Robin Hill
  0 siblings, 2 replies; 10+ messages in thread
From: Brian Candler @ 2013-02-05 13:49 UTC (permalink / raw)
  To: linux-raid

On Tue, Feb 05, 2013 at 01:40:14PM +0000, Brian Candler wrote:
> On Tue, Feb 05, 2013 at 01:02:37PM +0000, Robin Hill wrote:
> > > 1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
> > > And if so which one?
> > > 
> > No, that's normal. A RAID5 (or RAID6) array is created in a degraded
> > form, then the last disk(s) are recovered (it's the quickest way of
> > getting the array ready for use).
> 
> Ah I see. Thank you.

The odd thing is, if I make a RAID6 I get [UUUUU] with no underscores?

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid6 sdy[23](S) sdx[22] sdw[21] sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      61532835840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [23/23] [UUUUUUUUUUUUUUUUUUUUUUU]
      [>....................]  resync =  0.0% (850232/2930135040) finish=1607.7min speed=30365K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

unused devices: <none>

Regards,

Brian.


* Re: Interpreting mdstat output
  2013-02-05 13:49     ` Brian Candler
@ 2013-02-05 13:54       ` Phil Turmel
  2013-02-05 13:59       ` Robin Hill
  1 sibling, 0 replies; 10+ messages in thread
From: Phil Turmel @ 2013-02-05 13:54 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

On 02/05/2013 08:49 AM, Brian Candler wrote:
> On Tue, Feb 05, 2013 at 01:40:14PM +0000, Brian Candler wrote:
>> On Tue, Feb 05, 2013 at 01:02:37PM +0000, Robin Hill wrote:
>>>> 1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
>>>> And if so which one?
>>>>
>>> No, that's normal. A RAID5 (or RAID6) array is created in a degraded
>>> form, then the last disk(s) are recovered (it's the quickest way of
>>> getting the array ready for use).
>>
>> Ah I see. Thank you.
> 
> The odd thing is, if I make a RAID6 I get [UUUUU] with no underscores?
> 
> # cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
> md127 : active raid6 sdy[23](S) sdx[22] sdw[21] sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
>       61532835840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [23/23] [UUUUUUUUUUUUUUUUUUUUUUU]
>       [>....................]  resync =  0.0% (850232/2930135040) finish=1607.7min speed=30365K/sec
>       bitmap: 22/22 pages [88KB], 65536KB chunk

Raid6 resyncs to get started, whereas raid5 rebuilds to get started.

Regular parity is computationally identical whether you create it from the
data or compute one data block from the parity and the rest.  So reading
linearly from all drives but one, and writing linearly to just that one, is
the fastest way to start a raid5.
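
As a tiny shell illustration of that symmetry (made-up byte values, nothing
md-specific -- recovering d1 is the same XOR as creating the parity):

    $ d1=0x5a; d2=0x3c; d3=0xf0
    $ p=$(( d1 ^ d2 ^ d3 ))               # parity computed from the data
    $ printf '%#x\n' $(( p ^ d2 ^ d3 ))   # "lost" d1 recomputed from parity + rest
    0x5a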

The Q syndrome in raid6 is not symmetrical.  It is more expensive to
compute data from Q, so raid6 doesn't try to do that on creation.

Phil



* Re: Interpreting mdstat output
  2013-02-05 13:49     ` Brian Candler
  2013-02-05 13:54       ` Phil Turmel
@ 2013-02-05 13:59       ` Robin Hill
  1 sibling, 0 replies; 10+ messages in thread
From: Robin Hill @ 2013-02-05 13:59 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid


On Tue Feb 05, 2013 at 01:49:07PM +0000, Brian Candler wrote:

> On Tue, Feb 05, 2013 at 01:40:14PM +0000, Brian Candler wrote:
> > On Tue, Feb 05, 2013 at 01:02:37PM +0000, Robin Hill wrote:
> > > > 1. The UUUU_ and [22/21] suggests that one disk is bad, but is that true?
> > > > And if so which one?
> > > > 
> > > No, that's normal. A RAID5 (or RAID6) array is created in a degraded
> > > form, then the last disk(s) are recovered (it's the quickest way of
> > > getting the array ready for use).
> > 
> > Ah I see. Thank you.
> 
> The odd thing is, if I make a RAID6 I get [UUUUU] with no underscores?
> 
> # cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
> md127 : active raid6 sdy[23](S) sdx[22] sdw[21] sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
>       61532835840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [23/23] [UUUUUUUUUUUUUUUUUUUUUUU]
>       [>....................]  resync =  0.0% (850232/2930135040) finish=1607.7min speed=30365K/sec
>       bitmap: 22/22 pages [88KB], 65536KB chunk
> 
> unused devices: <none>
> 
Looks like my mistake. The mdadm manual page indicates that it's only
RAID5 that is done this way. RAID6 will just do a full resync to
generate the parity.
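
The difference is also visible directly in sysfs while the array settles (a
quick check, assuming md127):

    # cat /sys/block/md127/md/sync_action
    resync

A freshly created RAID5 reports "recover" there instead, matching the
"recovery" line in /proc/mdstat.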

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* Re: Interpreting mdstat output
  2013-02-05 12:30 ` Roy Sigurd Karlsbakk
  2013-02-05 12:34   ` Brian Candler
@ 2013-02-05 22:43   ` Hans-Peter Jansen
  1 sibling, 0 replies; 10+ messages in thread
From: Hans-Peter Jansen @ 2013-02-05 22:43 UTC (permalink / raw)
  To: linux-raid

On Tuesday, 5 February 2013 at 13:30:41, you wrote:
> > (Ubuntu 12.04.2, kernel 3.2.0-37-generic)
> > 
> > I created a RAID5 array with 22 data disks and 2 hot spares, like
> > this:
> > 
> > # mdadm --create /dev/md/dbs -l raid5 -n 22 -x 2 -c 512 -b internal
> > /dev/sd{b..y}
> 
> I believe using 22 drives in a single RAID-5 is something like BASE jumping
> with a large umbrella.

That is Bungee-Jumping with the cord tied to your testicles...

SCR,
Pete

https://lkml.org/lkml/2002/7/14/142


