* Need Help with Corrupted RAID6 Array
From: Kenneth Emerson @ 2020-10-06 11:05 UTC (permalink / raw)
To: linux-raid
It's been several years since I asked for and received help on this list.
Once again I find myself in a bind: I have accidentally destroyed
one of the disks in a five-drive set of 4 TB drives configured as
RAID6. When I rebooted, two of the three arrays rebuilt correctly;
however, the third (the largest and most important) would not assemble.
I thought that, even though I had lost one drive, I could rebuild the
array by substituting a new, partitioned drive, but I cannot get the
array to start.
Can anyone help me out, please?
Regards,
Ken Emerson
This is what I see in /proc/mdstat:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[2] sde1[4] sdd1[3] sdb1[1]
292800 blocks [5/4] [_UUUU]
md1 : active raid1 sdc2[4] sde2[1] sdd2[3] sdb2[2]
292968384 blocks [5/4] [_UUUU]
md3 : inactive sdc4[9](S) sde4[4](S) sdd4[2](S) sdb4[3](S)
10532388320 blocks super 1.0
All four drives are marked as spares (sda4 is the missing/destroyed partition).
If I assemble the array and force it to run, the drives are no longer
marked as spares, but even with --force the array will not go active:
root@MythTV:/home/ken# mdadm --assemble --run /dev/md3
root@MythTV:/home/ken# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[2] sde1[4] sdd1[3] sdb1[1]
292800 blocks [5/4] [_UUUU]
md1 : active raid1 sdc2[4] sde2[1] sdd2[3] sdb2[2]
292968384 blocks [5/4] [_UUUU]
md3 : inactive sdc4[9] sde4[4] sdd4[2] sdb4[3]
10532388320 blocks super 1.0
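One recovery avenue (not shown in the thread itself, and worth backing up the superblocks first) is to stop the half-assembled array and retry the assembly naming the surviving members explicitly with --force. A dry-run sketch using the device names from this thread; the run() guard only echoes each command, and must be removed before anything is actually executed:

```shell
# Dry-run sketch of a forced reassembly with explicit members.
# Assumptions: array and member names as reported in this thread.
ARRAY=/dev/md3
MEMBERS="/dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4"

run() { echo "would run: $*"; }   # echo guard; drop for real use

run mdadm --stop "$ARRAY"                             # clear the inactive state first
run mdadm --assemble --force --run "$ARRAY" $MEMBERS  # retry with explicit members
```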
The mdadm --examine output for each of the sdX4 partitions:
root@MythTV:/home/ken# mdadm --examine /dev/sdb4
/dev/sdb4:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 2f327db6:af6ce8e0:954fbaa8:10e20661
Name : mythtv:3
Creation Time : Sun Dec 4 12:17:39 2011
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5266194160 (2511.12 GiB 2696.29 GB)
Array Size : 13165485120 (12555.59 GiB 13481.46 GB)
Used Dev Size : 5266194048 (2511.12 GiB 2696.29 GB)
Super Offset : 5266194416 sectors
Unused Space : before=0 sectors, after=368 sectors
State : clean
Device UUID : 1e3e196e:fa8efa79:9fd6479e:2fd419ca
Update Time : Mon Oct 5 14:47:48 2020
Checksum : b3bdbb19 - correct
Events : 812437
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@MythTV:/home/ken# mdadm --examine /dev/sdc4
/dev/sdc4:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 2f327db6:af6ce8e0:954fbaa8:10e20661
Name : mythtv:3
Creation Time : Sun Dec 4 12:17:39 2011
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5266194160 (2511.12 GiB 2696.29 GB)
Array Size : 13165485120 (12555.59 GiB 13481.46 GB)
Used Dev Size : 5266194048 (2511.12 GiB 2696.29 GB)
Super Offset : 5266194416 sectors
Unused Space : before=0 sectors, after=368 sectors
State : clean
Device UUID : b5b6951e:16fe5bd2:9dfc0cdc:5baa05f3
Update Time : Mon Oct 5 14:47:48 2020
Checksum : 234c9efd - correct
Events : 812437
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@MythTV:/home/ken# mdadm --examine /dev/sdd4
/dev/sdd4:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 2f327db6:af6ce8e0:954fbaa8:10e20661
Name : mythtv:3
Creation Time : Sun Dec 4 12:17:39 2011
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5266194160 (2511.12 GiB 2696.29 GB)
Array Size : 13165485120 (12555.59 GiB 13481.46 GB)
Used Dev Size : 5266194048 (2511.12 GiB 2696.29 GB)
Super Offset : 5266194416 sectors
Unused Space : before=0 sectors, after=368 sectors
State : clean
Device UUID : 242bd7e8:b11ca61d:664f7083:589eb9fa
Update Time : Mon Oct 5 14:47:48 2020
Checksum : e7ef78c5 - correct
Events : 812437
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@MythTV:/home/ken# mdadm --examine /dev/sde4
/dev/sde4:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 2f327db6:af6ce8e0:954fbaa8:10e20661
Name : mythtv:3
Creation Time : Sun Dec 4 12:17:39 2011
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5266194160 (2511.12 GiB 2696.29 GB)
Array Size : 13165485120 (12555.59 GiB 13481.46 GB)
Used Dev Size : 5266194048 (2511.12 GiB 2696.29 GB)
Super Offset : 5266194416 sectors
Unused Space : before=0 sectors, after=368 sectors
State : clean
Device UUID : 6e95b4b2:0da02e12:d79162ff:77337ed8
Update Time : Mon Oct 5 14:47:48 2020
Checksum : c3dfe - correct
Events : 812437
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 4
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
The mdadm.conf file:
root@MythTV:/home/ken# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 UUID=230f126b:0d2a4439:c44c77eb:7ee19756 spares=1
ARRAY /dev/md1 UUID=90f0aede:03a99d2a:bd811544:edcdae81 spares=1
ARRAY /dev/md/2 metadata=1.0 UUID=ca0e0cc2:de96d489:d07fd26c:685aef08 name=mythtv:2 spares=1
ARRAY /dev/md/3 metadata=1.0 UUID=2f327db6:af6ce8e0:954fbaa8:10e20661 name=mythtv:3
# This file was auto-generated on Sun, 09 Mar 2014 11:12:49 -0500
# by mkconf $Id$
* Re: Need Help with Corrupted RAID6 Array
From: Roman Mamedov @ 2020-10-06 13:17 UTC (permalink / raw)
To: Kenneth Emerson; +Cc: linux-raid
On Tue, 6 Oct 2020 06:05:40 -0500
Kenneth Emerson <kenneth.emerson@gmail.com> wrote:
> If I assemble and force it to run, the drives are no longer spare but
> even with a --force, the array will not go active:
>
> root@MythTV:/home/ken# mdadm --assemble --run /dev/md3
> root@MythTV:/home/ken# cat /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
> md0 : active raid1 sdc1[2] sde1[4] sdd1[3] sdb1[1]
> 292800 blocks [5/4] [_UUUU]
>
> md1 : active raid1 sdc2[4] sde2[1] sdd2[3] sdb2[2]
> 292968384 blocks [5/4] [_UUUU]
>
> md3 : inactive sdc4[9] sde4[4] sdd4[2] sdb4[3]
> 10532388320 blocks super 1.0
Would be nice to see 'mdadm --detail /dev/md3' when it's in this state, and
what it says in 'dmesg' after you tried --force.
--
With respect,
Roman
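The diagnostics Roman asks for can be sketched as a small script. Both commands are read-only, but the show() guard below only echoes them, so nothing runs here; drop the guard (and run as root) to gather the real output:

```shell
# Diagnostics for an array stuck inactive (dry-run sketch).
# Assumption: the array name /dev/md3 from this thread.
ARRAY=/dev/md3

show() { echo "would run: $*"; }   # echo guard; drop for real use

show mdadm --detail "$ARRAY"   # per-member state while the array is inactive
show dmesg                     # kernel messages from the --force attempt
```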
* Re: Need Help with Corrupted RAID6 Array
From: Mark Wagner @ 2020-10-06 19:57 UTC (permalink / raw)
To: Linux RAID Mailing List; +Cc: Kenneth Emerson
On Tue, Oct 6, 2020 at 4:06 AM Kenneth Emerson
<kenneth.emerson@gmail.com> wrote:
>
> It's been several years since I asked for and received help on this list.
> Once again I find myself in a bind: I have accidentally destroyed
> one of the disks in a five-drive set of 4 TB drives configured as
> RAID6. When I rebooted, two of the three arrays rebuilt correctly;
> however, the third (the largest and most important) would not assemble.
> I thought that, even though I had lost one drive, I could rebuild the
> array by substituting a new, partitioned drive, but I cannot get the
> array to start.
> root@MythTV:/home/ken# mdadm --examine /dev/sdb4
> /dev/sdb4:
> Raid Level : raid6
> Raid Devices : 7
> Array Size : 13165485120 (12555.59 GiB 13481.46 GB)
> Used Dev Size : 5266194048 (2511.12 GiB 2696.29 GB)
> Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
You say this is a five-disk RAID-6 array, but the disk metadata says,
in three different ways, that this is a seven-disk array. Do you have
any idea what could be causing this discrepancy?
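The three member-count indicators Mark refers to can be pulled out of the --examine output mechanically. A self-contained sketch, using sample lines and sizes copied from this thread so no real array is needed:

```shell
# Extract the member-count indicators from an mdadm --examine excerpt.
# The sample text and sizes below are copied verbatim from this thread.
sample="Raid Devices : 7
Array State : AAAAAAA"

raid_devices=$(printf '%s\n' "$sample" | awk -F': ' '/Raid Devices/ {print $2}')
state=$(printf '%s\n' "$sample" | awk -F': ' '/Array State/ {print $2}')

# Third indicator: Array Size (KB) divided by Used Dev Size (sectors/2 = KB)
# gives the number of data disks; RAID6 adds two parity disks on top.
array_kb=13165485120
used_sectors=5266194048
data_disks=$(( array_kb / (used_sectors / 2) ))

echo "Raid Devices field : $raid_devices"            # 7
echo "Array State slots  : ${#state}"                # 7
echo "Size-implied members: $(( data_disks + 2 ))"   # 5 data + 2 parity = 7
```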
--
Mark
* Re: Need Help with Corrupted RAID6 Array
From: antlists @ 2020-10-06 22:10 UTC (permalink / raw)
To: Kenneth Emerson, linux-raid
On 06/10/2020 12:05, Kenneth Emerson wrote:
> root@MythTV:/home/ken# mdadm --assemble --run /dev/md3
> root@MythTV:/home/ken# cat /proc/mdstat
Hope I'm not teaching grandma to suck eggs, but have you remembered to
scatter "mdadm --stop" liberally between attempts?
Cheers,
Wol
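Wol's point, as a sketch: md keeps a failed assembly half-registered, so each retry should begin by checking /proc/mdstat and stopping the array if it is still listed. Self-contained here, using a sample of the thread's own mdstat line instead of the real file:

```shell
# Check whether md3 is still registered before retrying an assembly.
# Self-contained: uses a sample line from this thread's /proc/mdstat
# rather than reading the real file.
mdstat='md3 : inactive sdc4[9] sde4[4] sdd4[2] sdb4[3]'

if printf '%s\n' "$mdstat" | grep -q '^md3 :'; then
  echo "md3 still registered; run 'mdadm --stop /dev/md3' before retrying"
fi
```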