* raid10 array split into two degraded raid10 arrays
From: vincent @ 2012-07-20 9:10 UTC (permalink / raw)
To: linux-raid
Hi, everyone:
I am Vincent, and I am writing to ask a question about mdadm.
I created a raid10 array with 4 160G disks using the command:
mdadm -Cv /dev/md0 -l10 -n4 /dev/sd[abcd]
My mdadm version is 3.2.2 and my kernel version is 2.6.38.
While the raid10 array was resyncing, I used the following
command to make a file system on it: mkfs.ext3 /dev/md0
Everything was OK. The array continued to resync, but when the
resync reached 3.4%, there were a lot of I/O errors on "sda" and
"sdc". There were bad blocks on sda and sdc.
Then I used "cat /proc/mdstat" to see the status of /dev/md0:
Personalities : [raid10]
md0 : active raid10 sdb[1] sdd[3]
310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
unused devices: <none>
/dev/sdc and /dev/sda had been dropped from the array.
Then I rebooted the system, but when I used "cat /proc/mdstat" to
check the status of /dev/md0 again, I saw:
Personalities : [raid10]
md126 : active raid10 sda[0] sdc[2]
310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [U_U_]
md0 : active raid10 sdb[1] sdd[3]
310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
unused devices: <none>
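(As an aside, the "[n/m] [flags]" fields in those status lines can be decoded mechanically. A minimal sketch using a sample line copied from above; the meaning of [n/m] (n configured slots, m currently active) is standard /proc/mdstat notation, while the sed parsing itself is just an illustration:)

```shell
#!/bin/sh
# Decode the "[n/m] [flags]" part of a /proc/mdstat status line.
# Sample line copied from this thread; on a live system you would
# read it with: grep blocks /proc/mdstat
line='310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]'
# [4/2] means 4 configured slots, 2 currently active; U = up, _ = missing.
echo "$line" | sed -n 's/.*\[\([0-9]*\)\/\([0-9]*\)\] \[\([U_]*\)\].*/configured=\1 active=\2 map=\3/p'
# prints: configured=4 active=2 map=_U_U
```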
There was now a second array, named md126, consisting of /dev/sdc
and /dev/sda.
I used "mdadm --assemble --scan" to assemble the md devices. The
output of the command was:
mdadm: /dev/md/0 exists - ignoring
md: md0 stopped.
mdadm: ignoring /dev/sda as it reports /dev/sdd as failed
mdadm: ignoring /dev/sdc as it reports /dev/sdd as failed
md: bind<sdd>
md: bind<sdb>
md/raid10:md0: active with 2 out of 4 devices
md0: detected capacity change from 0 to 317791928320
mdadm: /dev/md0 has been started with 2 drives (out of 4).
md0: unknown partition table
mdadm: /dev/md/0 exists - ignoring
md: md126 stopped.
md: bind<sdc>
md: bind<sda>
md/raid10:md126: active with 2 out of 4 devices
md126: detected capacity change from 0 to 317791928320
mdadm: /dev/md126 has been started with 2 drives (out of 4).
md126: unknown partition table
And then I used "mdadm -E /dev/sda", "mdadm -E /dev/sdb", "mdadm
-E /dev/sdc", "mdadm -E /dev/sdd",
"mdadm -D /dev/md0" and "mdadm -D /dev/md126" to check the
detailed info of sda, sdb, sdc and sdd.
I found that the "Array UUID" of all of these devices (sda,
sdb, sdc, sdd) was the same. But the
"Events" and "Update Time" of "sda" and "sdc" matched each
other (21, Fri Jul 6 11:02:09 2012), and the
"Events" and "Update Time" of "sdb" and "sdd" matched each
other (35, Fri Jul 6 11:06:21 2012).
Although the "Update Time" and "Events" of "sda" and "sdc" were
not equal to those of "sdb" and "sdd",
they had the same "Array UUID". Why did this array split into two
degraded arrays that have the same UUID?
As the two arrays had the same UUID, it is difficult to
distinguish and use them. I think this is unreasonable.
Could you help me?
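The split is easy to see from the event counts alone. Here is a small self-contained sketch that groups the members the same way the two half-arrays formed; the device/event pairs are hard-coded from the -E output above, whereas on a real system they would be scraped from "mdadm -E":

```shell
#!/bin/sh
# Group member devices by their superblock "Events" count. The pairs
# below are the values reported by "mdadm -E" in this thread.
events() {
cat <<'EOF'
/dev/sda 21
/dev/sdb 35
/dev/sdc 21
/dev/sdd 35
EOF
}
# Devices sharing an event count form one consistent half of the array.
events | awk '{ g[$2] = g[$2] $1 " " }
              END { for (e in g) print "Events " e ":", g[e] }'
```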
* Re: raid10 array split into two degraded raid10 arrays
From: NeilBrown @ 2012-07-22 23:11 UTC (permalink / raw)
To: vincent; +Cc: linux-raid
On Fri, 20 Jul 2012 17:10:14 +0800 "vincent" <hanguozhong@meganovo.com> wrote:
> [...]
> Although the "Update Time" and "events" property of "sda" and
> "sdc" were not equal to "sdb" and "sdd",
> they had the same "Array UUID". why this array tend to two
> degraded arrays those had the same uuid?
> As the two arrays had the same uuid, it is difficult to
> distinguish and use them. I think it is unreasonable,
> could you help me ?
>
Yes, this is a known problem. Hopefully it will be fixed in the next release
of mdadm.
For now, just remove the faulty devices, or at least remove the metadata from
them with
mdadm --zero-superblock /dev/sd[ac]
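(A dry-run sketch of that cleanup, for anyone following along: it only prints the commands instead of executing them. The array and device names are the ones from this thread, and the replacement-disk names in the last step are hypothetical.)

```shell
#!/bin/sh
# Dry-run sketch of the suggested cleanup: each command is printed,
# not executed. Array/device names are taken from this thread; the
# replacement disks in the last step are hypothetical.
run() { echo "+ $*"; }   # swap the echo for "$@" to really execute

run mdadm --stop /dev/md126                    # stop the stray half-array
run mdadm --zero-superblock /dev/sda /dev/sdc  # wipe the stale metadata
run mdadm /dev/md0 --add /dev/sde /dev/sdf     # add replacement disks (hypothetical names)
```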
NeilBrown