* assistance recovering failed raid6 array
From: Martin Bosner @ 2017-02-20 1:49 UTC (permalink / raw)
To: linux-raid
I am running a software raid6 with 36 x 3TB disks (sda to sdaj). All
disks have one partition (gpt, 100%, primary, raid on) and I am using
btrfs on top of the raid.
Last week one of the disks failed and was unrecoverable. I replaced the
disk (sdk) with a new one and the resync process started.
At around 80% recovery two further disks failed and the recovery process
stopped. The failed disks are sdm and sdh.
All other disks seem to be fine, and I was about to use the "mdadm
--create" command when I remembered the lines
"You have been warned! It's better to send an email to the linux-raid
mailing list with detailed information"
So here I am, asking for advice on how to continue.
More details:
Only 35% of the raid space is used.
The disk status is:
sdk: the original disk is dead and the replacement was around 80% recovered.
sdm: I was able to copy the first 2 TB with two errors (128 kbyte) and
the third TB with around 200 GB of data missing, using ddrescue to a new disk.
sdh: the original disk is dead; I replaced it with a brand new one
and created the partition sdh1.
Since the array is offline I cannot add sdh1 to the raid, and trying to
assemble the array gives me:
For mdadm --assemble --force with sdh1:
mdadm: no RAID superblock on /dev/sdh1
mdadm: /dev/sdh1 has no superblock - assembly aborted
For mdadm --assemble --force without sdh1:
mdadm: /dev/md0 assembled from 33 drives, 1 rebuilding and 1 spare - not
enough to start the array.
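For reference, the usual first step before any forced assembly is to compare the Events counters across all members; devices far behind the maximum are the risky ones to force back in. A minimal sketch of that comparison, using made-up sample text in place of a real `mdadm -E /dev/sd?1` run (the event numbers below are illustrative, not from this array):

```shell
# Sketch: compare per-member Events counters. The sample stands in for
# real 'mdadm -E' output; the 140381 value is invented for illustration.
sample='/dev/sda1:
         Events : 140559
/dev/sdm1:
         Events : 140381'

summary=$(printf '%s\n' "$sample" | awk '
  /^\/dev\// { dev = $1 }
  /Events/   { ev[dev] = $3; if ($3 + 0 > max + 0) max = $3 }
  END        { for (d in ev) printf "%s events=%s behind=%d\n", d, ev[d], max - ev[d] }')
printf '%s\n' "$summary"
```

A member showing a large `behind` value lost writes relative to the rest of the array, which is exactly what makes `--assemble --force` risky.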
Full status of /dev/sda1:
mdadm --examine /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f90e9c41:5aa3c3b2:d715781b:1abbb439
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b0b57ef2 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active,
'.' == missing, 'R' == replacing)
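As a sanity check on the superblock values above: for raid6, the reported Array Size (in 1 KiB blocks) should equal (Raid Devices - 2) times the per-device size. A quick sketch using the numbers copied from this `mdadm -E` output (sectors are 512 bytes, so KiB = sectors / 2):

```shell
# Sketch: verify Array Size = (Raid Devices - 2) * Avail Dev Size for
# this raid6 array. Values copied from the mdadm -E output above.
raid_devices=36
avail_dev_sectors=5860268032      # per-member size in 512-byte sectors
array_size_kib=99624556544        # Array Size as reported (1 KiB blocks)

expected=$(( (raid_devices - 2) * avail_dev_sectors / 2 ))
echo "expected=${expected} reported=${array_size_kib}"
# prints: expected=99624556544 reported=99624556544
```

The two agreeing is a small confirmation that the superblock geometry is self-consistent.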
mdadm --examine for each drive to get "Device Role":
"sda Device Role : Active device 0"
"sdb Device Role : Active device 1"
"sdc Device Role : Active device 2"
"sdd Device Role : Active device 3"
"sde Device Role : Active device 4"
"sdf Device Role : Active device 5"
"sdg Device Role : Active device 6"
"sdh" mdadm: No md superblock detected on /dev/sdh1.
"sdi Device Role : Active device 8"
"sdj Device Role : Active device 9"
"sdk Device Role : spare"
"sdl Device Role : Active device 11"
"sdm Device Role : Active device 12"
"sdn Device Role : Active device 13"
"sdo Device Role : Active device 14"
"sdp Device Role : Active device 15"
"sdq Device Role : Active device 16"
"sdr Device Role : Active device 17"
"sds Device Role : Active device 18"
"sdt Device Role : Active device 19"
"sdu Device Role : Active device 20"
"sdv Device Role : Active device 21"
"sdw Device Role : Active device 22"
"sdx Device Role : Active device 23"
"sdy Device Role : Active device 24"
"sdz Device Role : Active device 25"
"sdaa Device Role : Active device 26"
"sdab Device Role : Active device 27"
"sdac Device Role : Active device 28"
"sdad Device Role : Active device 29"
"sdae Device Role : Active device 30"
"sdaf Device Role : Active device 31"
"sdag Device Role : Active device 32"
"sdah Device Role : Active device 33"
"sdai Device Role : Active device 34"
"sdaj Device Role : Active device 35"
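The dots in the Array State string correspond to the gaps in the role list above; a small sketch that extracts the missing slot numbers (the state string is copied from the `-E` output):

```shell
# Sketch: list the slot numbers marked '.' (missing) in an md Array State
# string. The string is copied from the mdadm -E output above.
state='AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA'

echo "$state" | awk '{
  for (i = 1; i <= length($0); i++)
    if (substr($0, i, 1) == ".") printf "missing slot %d\n", i - 1
}'
# prints: missing slot 7
#         missing slot 10
```

That matches the listing: slot 7 is sdh (no superblock on the fresh replacement) and slot 10 is the dead original sdk, whose partially-rebuilt replacement now shows as a spare.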
The system is Ubuntu 16.04.2 LTS (x86_64) with a 4.4.0-62-generic kernel.
mdadm --version gives me: mdadm - v3.3 - 3rd September 2013
* Re: assistance recovering failed raid6 array
From: Phil Turmel @ 2017-02-20 15:39 UTC (permalink / raw)
To: Martin Bosner, linux-raid
On 02/19/2017 08:49 PM, Martin Bosner wrote:
> I am running a software raid6 with 36 x 3TB disks (sda to sdaj). All
> disks have one partition (gpt, 100%, primary, raid on) and I am using
> btrfs on top of the raid.
>
> Last week one of the disks failed and was unrecoverable. I replaced the
> disk (sdk) with a new one and the resync process started.
> At around 80% recovery two further disks failed and the recovery process
> stopped. The failed disks are sdm and sdh.
>
> All other disks seem to be fine, and I was about to use the "mdadm
> --create" command when I remembered the lines
> "You have been warned! It's better to send an email to the linux-raid
> mailing list with detailed information"
>
> So here I am, asking for advice on how to continue.
More information, please. Paste inline, untrimmed, in your reply with
line wrapping disabled. Plain text only. Use multiple mails if needed.
List limit is ~100k IIRC.
# dmesg
# for x in /dev/sd[a-z] /dev/sda[a-j] ; do echo mdadm -E ${x}1 ; smartctl -iA -l scterc $x ; done
Phil
* Re: assistance recovering failed raid6 array
From: Phil Turmel @ 2017-02-20 17:36 UTC (permalink / raw)
To: Martin Bosner; +Cc: linux-raid
Hi Martin,
{ Note: convention on kernel.org is reply-to-all, bottom post or
interleave, and trim unneeded material. }
On 02/20/2017 12:05 PM, Martin Bosner wrote:
> for x in /dev/sd[a-z] /dev/sda[a-j] ; do echo mdadm -E ${x}1 ; smartctl -iA -l scterc $x ; done
Darn. I didn't mean to leave 'echo' there. Please run this
part over again:
for x in /dev/sd[a-z] /dev/sda[a-j] ; do mdadm -E ${x}1 ; done
> smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-62-generic] (local build)
> Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
>
> === START OF INFORMATION SECTION ===
> Model Family: Seagate Barracuda 7200.14 (AF)
> Device Model: ST3000DM001-1CH166
> Serial Number: Z1F4RN82
> LU WWN Device Id: 5 000c50 0662080a6
> Firmware Version: CC27
> User Capacity: 3,000,592,982,016 bytes [3.00 TB]
> Sector Sizes: 512 bytes logical, 4096 bytes physical
> Rotation Rate: 7200 rpm
> Form Factor: 3.5 inches
> Device is: In smartctl database [for details use: -P show]
> ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
> SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
> Local Time is: Mon Feb 20 17:04:09 2017 CET
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> === START OF READ SMART DATA SECTION ===
> SMART Attributes Data Structure revision number: 10
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
> 1 Raw_Read_Error_Rate 0x000f 111 099 006 Pre-fail Always - 32613624
> 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0
> 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 4
> 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
> 7 Seek_Error_Rate 0x000f 067 060 030 Pre-fail Always - 5379264
> 9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 4222
> 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
> 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 4
> 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
> 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
> 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
> 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 0 0
> 189 High_Fly_Writes 0x003a 099 099 000 Old_age Always - 1
> 190 Airflow_Temperature_Cel 0x0022 072 066 045 Old_age Always - 28 (Min/Max 26/28)
> 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
> 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 4
> 193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 4
> 194 Temperature_Celsius 0x0022 028 040 000 Old_age Always - 28 (0 25 0 0 0)
> 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
> 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
> 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
> 240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 4222h+01m+40.011s
> 241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 6286190299
> 242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 103173878942
>
> SCT Error Recovery Control command not supported
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^
Eewwww! You have desktop drives. Which means your array has
blown up due to timeout mismatch. You have some reading to
do.[1]
Phil
[1] Recommendations from the archives (whole threads):
http://marc.info/?l=linux-raid&m=139050322510249&w=2
http://marc.info/?l=linux-raid&m=135863964624202&w=2
http://marc.info/?l=linux-raid&m=135811522817345&w=1
http://marc.info/?l=linux-raid&m=133761065622164&w=2
http://marc.info/?l=linux-raid&m=132477199207506
http://marc.info/?l=linux-raid&m=133665797115876&w=2
http://marc.info/?l=linux-raid&m=142487508806844&w=3
http://marc.info/?l=linux-raid&m=144535576302583&w=2
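For context, the mitigation those archived threads converge on is: enable SCT ERC (around 7 seconds) on drives that support it, or raise the kernel's per-device command timeout well above the drive's internal retry time on drives that don't. A dry-run sketch that only prints the commands rather than executing them (the 180 s value and the helper name are this sketch's assumptions, following the threads' advice, not anything verified on this system):

```shell
# Sketch (dry run): print the timeout-mismatch fix for one drive, per the
# linux-raid archive recommendations. Nothing touches real hardware here.
suggest_timeout_fix() {
  dev="$1"      # bare device name, e.g. sda
  has_erc="$2"  # yes if 'smartctl -l scterc' reported support, no otherwise
  if [ "$has_erc" = yes ]; then
    echo "smartctl -l scterc,70,70 /dev/$dev    # cap error recovery at 7.0 s"
  else
    echo "echo 180 > /sys/block/$dev/device/timeout    # out-wait the drive"
  fi
}

# The ST3000DM001 above reports 'SCT Error Recovery Control command not
# supported', so it falls in the second category:
suggest_timeout_fix sda no
```

Without one of these, a desktop drive can spend minutes retrying a bad sector while the kernel gives up after 30 s and md kicks the whole drive out, which is the failure pattern described in this thread.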
* Re: assistance recovering failed raid6 array
From: Martin Bosner @ 2017-02-20 17:48 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
Hi Phil,
> { Note: convention on kernel.org is reply-to-all, bottom post or
> interleave, and trim unneeded material. }
Sorry, hope this one is better.
> for x in /dev/sd[a-z] /dev/sda[a-j] ; do mdadm -E ${x}1 ; done
See below for full output.
> Eewwww! You have desktop drives. Which means your array has
> blown up due to timeout mismatch. You have some reading to
> do.[1]
I will be using so-called "NAS" or "enterprise" disks in the next cluster … especially these Seagate disks were a bad decision.
> http://marc.info/?l=linux-raid&m=142487508806844&w=3
> http://marc.info/?l=linux-raid&m=144535576302583&w=2
I will read through it.
Martin
#########
for x in /dev/sd[a-z] /dev/sda[a-j] ; do mdadm -E ${x}1 ; done
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f90e9c41:5aa3c3b2:d715781b:1abbb439
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b0b57ef2 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : bf3081d1:5afead3a:839df956:098403b9
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 83554c55 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 6e592519:9dafe1fd:6616a6b4:6de9ab52
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 8582048f - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 465ca8a2:45bd05a3:94b4d0cc:0bf9ee5d
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d796c2dc - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 79f7fb50:15f0887f:4d7adf0c:632e9743
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 88248bf0 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 5a1097b1:7da38314:8633c3f9:d3b843ab
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d24a9be4 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 5
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 31076c1e:97f2be2f:1ae76487:48667fc7
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 43842df - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 6
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdh1.
/dev/sdi1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 34ecbf22:16e15f2f:a9075b03:6f8f2de3
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9fd16018 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 8
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdj1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 390cbe6a:bceed865:5d88091f:86c7228b
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e1ec468f - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 9
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 79b27fa7:954302fe:4f669a20:1ddf9a15
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 42e0375d - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : c7215fad:1c6ecbbf:2c2e0feb:aabbb208
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : c8157597 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 11
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdm1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x8b
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 296160 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 1366f174:d65cf0f7:b20f5b4f:c2263bbe
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : e1a5d782 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 12
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdn1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : c9b9b4ad:74b4989c:1c0003cd:dd402919
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 97a2aaf2 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 13
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdo1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : d5a5ec07:91695b21:087ebb12:f3e6bf3f
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e2ec701c - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 14
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 7f4f1b78:7a0ae004:664b2208:da13eed6
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : c334b517 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 15
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdq1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : a2158ac6:82ac5c70:31d9b49b:66077e36
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 70429e7a - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 16
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdr1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : af23e4bb:04bae1e3:6b79aa24:ebf1ccb3
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : df6644c9 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 17
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sds1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 0c1675d5:0e2e9b88:3ab7545c:f5f2cd99
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : bb5bea0a - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 18
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdt1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f2920d03:7784140f:0108652e:ef335243
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : eb024f19 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 19
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdu1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 00b49bc7:54fa2eb2:38be88f4:177812ad
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 828ee067 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 20
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdv1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : a04149c8:d99b98d4:6ad76d27:b6930004
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 2f79445d - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 21
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdw1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 3609155b:7272da25:a6ccf32b:707e27e1
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : f533c282 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 22
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdx1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : c5462d71:6f69ab33:a6ddeb6a:28210c6e
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : e4f9aae3 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 23
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdy1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 44823783:34d0edf7:03d4ae6f:522befc6
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 18ec4da1 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 24
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdz1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 063185d5:e0d3dbce:ab39c4fa:ae855b0c
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 12a9c008 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 25
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdaa1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : da6a846c:563d61bf:ee1e786a:c9e5280b
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 8afa8c7 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 26
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdab1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 0ea3c624:8e2e59ab:ed195520:23917089
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e10e7875 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 27
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdac1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : e6c66c40:e0c49813:e87dc481:7b57ef8b
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c8e25cf3 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 28
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdad1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : af24f1fb:a55ddb8b:3ecd2971:6f9dc92d
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 8de8e8dd - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 29
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdae1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : d4e87abe:00003a9d:e950d72c:f5b93939
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 28eeef8f - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 30
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdaf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 2ea54072:67978e8b:c6b99895:76f4157f
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 79a6e6b2 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 31
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdag1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 52ee4e92:9a60fdd5:b3393a70:f0e8e53f
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7f956d5e - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 32
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdah1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 80818d5a:85638be1:f9974e08:081404d2
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 7d948cde - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 33
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdai1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 6f014106:2eea458c:4a6dcf1a:a2666db0
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c4ecbb59 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 34
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdaj1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Name : media-storage:0 (local to host media-storage)
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Raid Devices : 36
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 99624556544 (95009.38 GiB 102015.55 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 0fdc0ec2:42d30dcb:f00d2c63:26171824
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 15 14:08:28 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7b89d039 - correct
Events : 140559
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 35
Array State : AAAAAAA.AA.AAAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: assistance recovering failed raid6 array
2017-02-20 17:36 ` Phil Turmel
2017-02-20 17:48 ` Martin Bosner
@ 2017-02-20 17:50 ` Roman Mamedov
2017-02-20 18:13 ` Martin Bosner
1 sibling, 1 reply; 16+ messages in thread
From: Roman Mamedov @ 2017-02-20 17:50 UTC (permalink / raw)
To: Martin Bosner; +Cc: Phil Turmel, linux-raid
> On 02/20/2017 12:05 PM, Martin Bosner wrote:
> > === START OF INFORMATION SECTION ===
> > Model Family: Seagate Barracuda 7200.14 (AF)
> > Device Model: ST3000DM001-1CH166
So you have the most terrible hard drive possible [1][2] (they WILL ALL fail),
run in about the most terrible RAID setup possible (only a single RAID5 would
have been worse). Now you realize why the latter was a bad idea: with such a
great number of disks, you should have picked a 3x12-member RAID6 or similar.
Just let this be a lesson in component choice and risk assessment, restore
from your backups and move on.
[1] https://en.wikipedia.org/wiki/ST3000DM001
[2] https://www.quora.com/What-are-some-of-the-worst-hard-drive-designs-ever
--
With respect,
Roman
* Re: assistance recovering failed raid6 array
2017-02-20 17:48 ` Martin Bosner
@ 2017-02-20 18:11 ` Phil Turmel
2017-02-20 18:27 ` Martin Bosner
0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2017-02-20 18:11 UTC (permalink / raw)
To: Martin Bosner; +Cc: linux-raid
Hi Martin,
On 02/20/2017 12:48 PM, Martin Bosner wrote:
> Hi Phil,
>
>> { Note: convention on kernel.org is reply-to-all, bottom post or
>> interleave, and trim unneeded material. }
> Sorry, hope this one is better.
>
>
>> for x in /dev/sd[a-z] /dev/sda[a-j] ; do mdadm -E ${x}1 ; done
>
> See below for full output.
Not good.
>> Eewwww! You have desktop drives. Which means your array has
>> blown up due to timeout mismatch. You have some reading to do.[1]
Of the 36 original disks, you have 34. You have one incomplete
rebuild, meaning it is still technically a spare. One of the still
active 34 is also showing pending reallocations, meaning that disk will
not be able to supply all sectors to complete any recovery.
{ /dev/sdah, serial # S1F0FPYR }
If you have any access to the two "dead" drives, there might be
a slight chance. Since they were likely kicked out due to timeout
mismatch, not a complete failure, this could be possible.
Otherwise, you are utterly screwed. Sorry.
> I will be using so-called "NAS" or "enterprise" disks in the next
> cluster … especially these Seagate disks were a bad decision.
Yes.
Phil
* Re: assistance recovering failed raid6 array
2017-02-20 17:50 ` Roman Mamedov
@ 2017-02-20 18:13 ` Martin Bosner
0 siblings, 0 replies; 16+ messages in thread
From: Martin Bosner @ 2017-02-20 18:13 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Phil Turmel, linux-raid
> So you have the most terrible hard drive possible [1][2](they WILL ALL fail),
That is very true.
* Re: assistance recovering failed raid6 array
2017-02-20 18:11 ` Phil Turmel
@ 2017-02-20 18:27 ` Martin Bosner
2017-02-20 19:01 ` Wols Lists
2017-02-20 19:16 ` Phil Turmel
0 siblings, 2 replies; 16+ messages in thread
From: Martin Bosner @ 2017-02-20 18:27 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
> On 20 Feb 2017, at 19:11, Phil Turmel <philip@turmel.org> wrote:
>
> Of the 36 original disks, you have 34. You have one incomplete
> rebuild, meaning it is still technically a spare. One of the still
> active 34 is also showing pending relocations, meaning that disk will
> not be able to supply all sectors to complete any recovery.
> { /dev/sdah, serial # S1F0FPYR }
>
> If you have any access to the two "dead" drives, there might be
> a slight chance. Since they were likely kicked out due to timeout
> mismatch, not a complete failure, this could be possible.
>
> Otherwise, you are utterly screwed. Sorry.
The disks are dead. I already tried different boards but that did not help.
What would happen if I recreate the array with --assume-clean? Would I be able to start the array? Can I mark disks as clean? I actually have one failed disk, one nearly recovered disk and one that has been copied by 2/3 ...
Martin
* Re: assistance recovering failed raid6 array
2017-02-20 18:27 ` Martin Bosner
@ 2017-02-20 19:01 ` Wols Lists
2017-02-20 19:11 ` Martin Bosner
2017-02-20 19:16 ` Phil Turmel
1 sibling, 1 reply; 16+ messages in thread
From: Wols Lists @ 2017-02-20 19:01 UTC (permalink / raw)
To: Martin Bosner, Phil Turmel; +Cc: linux-raid
On 20/02/17 18:27, Martin Bosner wrote:
>
>> On 20 Feb 2017, at 19:11, Phil Turmel <philip@turmel.org> wrote:
>>
>> Of the 36 original disks, you have 34. You have one incomplete
>> rebuild, meaning it is still technically a spare. One of the still
>> active 34 is also showing pending relocations, meaning that disk will
>> not be able to supply all sectors to complete any recovery.
>> { /dev/sdah, serial # S1F0FPYR }
>>
>> If you have any access to the two "dead" drives, there might be
>> a slight chance. Since they were likely kicked out due to timeout
>> mismatch, not a complete failure, this could be possible.
>>
>> Otherwise, you are utterly screwed. Sorry.
>
> The disks are dead. I already tried different boards but that did not help.
If it had been the timeout problem, you would probably have been able to
recover the array. As it isn't :-(
>
> What would happen if i recreate the array with —assume-clean ? Would i be able to start the array? Can I mark disks as clean? I actually have one failed disks, one nearly recovered disk and one that has been copied by 2/3 ...
>
You can try "--assemble --force". It sounds like you might well get away
with it.
BUT! DO NOT ATTEMPT TO USE THE ARRAY IF IT LOOKS LIKE IT'S OKAY. Are all
the disks the same age? In which case, all the other drives are on the
verge of failure, too!
I don't know whether to suggest you use smartctl to see what state the
drives are in (I've seen too many reports of allegedly healthy drives
failing, so I wouldn't trust it, especially with this particular drive.)
ddrescue your remaining drives *now*, and hope you're okay. You say you
are using 35% of the space across 36 drives, so you should only be using
roughly the first 1TB of each drive. We hope ...
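[Editor's note: a sketch of the two-pass ddrescue imaging Wol suggests. The device names and map-file path are examples only, and the map file must live on a disk that is neither source nor destination.]

```shell
#!/bin/sh
# Two-pass ddrescue imaging of a failing member disk.  The map file
# records which areas have been rescued, so runs can be interrupted,
# resumed, and later fed to recovery tooling.
rescue_disk() {
    src=$1; dst=$2; map=$3
    # Pass 1: copy everything easily readable; -n skips the slow
    # scraping phase, -f is required when writing to a block device.
    ddrescue -f -n "$src" "$dst" "$map"
    # Pass 2: go back and retry the remaining bad areas three times.
    ddrescue -f -r3 "$src" "$dst" "$map"
}
# Example (triple-check the device names before running):
# rescue_disk /dev/sdm /dev/sdX /root/sdm.map
```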
> Martin
>
Cheers,
Wol
* Re: assistance recovering failed raid6 array
2017-02-20 19:01 ` Wols Lists
@ 2017-02-20 19:11 ` Martin Bosner
0 siblings, 0 replies; 16+ messages in thread
From: Martin Bosner @ 2017-02-20 19:11 UTC (permalink / raw)
To: Wols Lists; +Cc: Phil Turmel, linux-raid
> On 20 Feb 2017, at 20:01, Wols Lists <antlists@youngman.org.uk> wrote:
>>
> You can try "--assemble --force". It sounds like you might well get away
> with it.
Would it be possible to start the array by adding sdk1 (setting its state to active) and resetting the state of sdm1? The array failed while I was copying stuff to another place ...
With --assemble --force I get this:
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 18 22:46:42 2016
Raid Level : raid6
Used Dev Size : -1
Raid Devices : 36
Total Devices : 35
Persistence : Superblock is persistent
Update Time : Wed Feb 15 14:08:28 2017
State : active, FAILED, Not Started
Active Devices : 33
Working Devices : 35
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 512K
Name : media-storage:0 (local to host media-storage)
UUID : 5c7c227e:22de5fc1:ca3ebb65:9c283567
Events : 140559
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
4 8 65 4 active sync /dev/sde1
5 8 81 5 active sync /dev/sdf1
6 8 97 6 active sync /dev/sdg1
14 0 0 14 removed
8 8 129 8 active sync /dev/sdi1
9 8 145 9 active sync /dev/sdj1
20 0 0 20 removed
39 8 177 11 active sync /dev/sdl1
12 8 193 12 spare rebuilding /dev/sdm1
13 8 209 13 active sync /dev/sdn1
14 8 225 14 active sync /dev/sdo1
40 8 241 15 active sync /dev/sdp1
16 65 1 16 active sync /dev/sdq1
17 65 17 17 active sync /dev/sdr1
18 65 33 18 active sync /dev/sds1
19 65 49 19 active sync /dev/sdt1
20 65 65 20 active sync /dev/sdu1
21 65 81 21 active sync /dev/sdv1
22 65 97 22 active sync /dev/sdw1
43 65 113 23 active sync /dev/sdx1
36 65 129 24 active sync /dev/sdy1
25 65 145 25 active sync /dev/sdz1
41 65 161 26 active sync /dev/sdaa1
27 65 177 27 active sync /dev/sdab1
28 65 193 28 active sync /dev/sdac1
37 65 209 29 active sync /dev/sdad1
38 65 225 30 active sync /dev/sdae1
42 65 241 31 active sync /dev/sdaf1
32 66 1 32 active sync /dev/sdag1
33 66 17 33 active sync /dev/sdah1
34 66 33 34 active sync /dev/sdai1
35 66 49 35 active sync /dev/sdaj1
44 8 161 - spare /dev/sdk1
Cheers
Martin
* Re: assistance recovering failed raid6 array
2017-02-20 18:27 ` Martin Bosner
2017-02-20 19:01 ` Wols Lists
@ 2017-02-20 19:16 ` Phil Turmel
2017-02-20 19:31 ` Martin Bosner
2017-02-20 20:45 ` Wols Lists
1 sibling, 2 replies; 16+ messages in thread
From: Phil Turmel @ 2017-02-20 19:16 UTC (permalink / raw)
To: Martin Bosner; +Cc: linux-raid
On 02/20/2017 01:27 PM, Martin Bosner wrote:
>> If you have any access to the two "dead" drives, there might be a
>> slight chance. Since they were likely kicked out due to timeout
>> mismatch, not a complete failure, this could be possible.
>>
>> Otherwise, you are utterly screwed. Sorry.
>
> The disks are dead. I already tried different boards but that did
> not help.
Oh, well.
> What would happen if I recreate the array with --assume-clean? Would
> I be able to start the array? Can I mark disks as clean? I actually
> have one failed disk, one nearly recovered disk and one that has
> been copied by 2/3 ...
For every stripe in the array, you need 34 devices of 36 to be
readable. Any time you fall back on ddrescue to make one of those
34, you are ensuring that some data is lost. But yes, that would
otherwise work. The 2/3 recovered disk is only useful in this (use
ddrescue to get as much of the missing disk as possible).
If you can get to 35 of 36 original disks, even with scattered errors,
you could complete a check scrub to make 35 good disks. With timeout
mismatch, you'll have to override the kernel timeouts for all devices,
so such a scrub would take a very long time, but would recover everything.
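[Editor's note: a sketch of the timeout override Phil refers to, for drives whose SCT ERC cannot be enabled. 180 seconds is the value commonly suggested in the timeout-mismatch material; the function name is illustrative.]

```shell
#!/bin/sh
# Raise the kernel's SCSI command timeout above the drive's internal
# error-recovery time, so md receives a read error back instead of the
# link being reset and the whole disk getting kicked from the array.
raise_timeouts() {
    secs=$1; shift
    sysblock=${SYSBLOCK:-/sys/block}   # overridable for testing
    for dev in "$@"; do
        f=$sysblock/$(basename "$dev")/device/timeout
        [ -w "$f" ] && echo "$secs" > "$f" && echo "$dev: timeout set to ${secs}s"
    done
}
# Typical usage on this array (must be repeated after every boot):
# raise_timeouts 180 /dev/sd[a-z] /dev/sda[a-j]
```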
> Martin
Phil
* Re: assistance recovering failed raid6 array
2017-02-20 19:16 ` Phil Turmel
@ 2017-02-20 19:31 ` Martin Bosner
2017-02-20 21:30 ` Phil Turmel
2017-02-20 20:45 ` Wols Lists
1 sibling, 1 reply; 16+ messages in thread
From: Martin Bosner @ 2017-02-20 19:31 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
> On 20 Feb 2017, at 20:16, Phil Turmel <philip@turmel.org> wrote:
>
> If you can get to 35 of 36 original disks, even with scattered errors,
> you could complete a check scrub to make 35 good disks. With timeout
> mismatch, you'll have to override the kernel timeouts for all devices,
> so such a scrub would take a very long time, but would recover everything.
Is there a way to tell mdadm to use sdk1 as an "active" device? And how can I tell the array that it should not try to recover sdm1 but set it active? Is there any magic to force the state? It might not be healthy for the normal use case but might be helpful for me.
Martin
* Re: assistance recovering failed raid6 array
2017-02-20 19:16 ` Phil Turmel
2017-02-20 19:31 ` Martin Bosner
@ 2017-02-20 20:45 ` Wols Lists
2017-02-20 21:21 ` Phil Turmel
1 sibling, 1 reply; 16+ messages in thread
From: Wols Lists @ 2017-02-20 20:45 UTC (permalink / raw)
To: Phil Turmel, Martin Bosner; +Cc: linux-raid
On 20/02/17 19:16, Phil Turmel wrote:
> For every stripe in the array, you need 34 devices of 36 to be
> readable. Any time you fall back on ddrescue to make one of those
> 34, you are ensuring that some data is lost. But yes, that would
> otherwise work. The 2/3 recovered disk is only useful in this (use
> ddrescue to get as much of the missing disk as possible).
I keep on asking :-)
But there's a request on the linux wiki for someone to write a
utility program that takes a ddrescue log and flags the duff sectors as
"soft unreadable". That would mean that if you can recover 35 drives,
provided no stripe has lost two sectors across two drives, you wouldn't
lose any data.
If you want to try and write that utility? Or if you want to email me a
ddrescue log with a bunch of failed sectors, I'll have a go at writing
it myself :-)
Cheers,
Wol
* Re: assistance recovering failed raid6 array
2017-02-20 20:45 ` Wols Lists
@ 2017-02-20 21:21 ` Phil Turmel
2017-02-21 2:03 ` Brad Campbell
0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2017-02-20 21:21 UTC (permalink / raw)
To: Wols Lists, Martin Bosner; +Cc: linux-raid
On 02/20/2017 03:45 PM, Wols Lists wrote:
> But there's a request on the linux wiki for someone to write a
> utility program that takes a ddrescue log and flags the duff sectors as
> "soft unreadable". That would mean that if you can recover 35 drives,
> provided no stripe has lost two sectors across two drives, you wouldn't
> lose any data.
>
> If you want to try and write that utility? Or if you want to email me a
> ddrescue log with a bunch of failed sectors, I'll have a go at writing
> it myself :-)
Check out hdparm --make-bad-sector. You can get what you are describing
by scripting that. It's marked very dangerous, but I guess if one has
nothing to lose....
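[Editor's note: a sketch of the scripting Phil has in mind, assuming a GNU ddrescue map file (byte offsets in hex, "-" marking unrescued areas) and 512-byte sectors. It only *prints* the hdparm commands, so nothing is written until the output is reviewed and piped to a shell.]

```shell
#!/bin/sh
# Translate the unrescued areas of a ddrescue map file into
# hdparm --make-bad-sector commands against the rescued *copy*, so that
# md sees read errors there instead of silently zeroed data.
bad_sector_cmds() {
    map=$1; dev=$2
    grep -v '^#' "$map" | while read -r pos size status _; do
        [ "$status" = "-" ] || continue   # "-" marks a bad (unrescued) area
        start=$(( pos / 512 ))            # map offsets and sizes are in
        count=$(( size / 512 ))           # bytes, written as 0x... hex
        s=$start
        while [ "$s" -lt $(( start + count )) ]; do
            echo "hdparm --yes-i-know-what-i-am-doing --make-bad-sector $s $dev"
            s=$(( s + 1 ))
        done
    done
}
# Review the output first, then: bad_sector_cmds /root/sdm.map /dev/sdX | sh
```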
Phil
* Re: assistance recovering failed raid6 array
2017-02-20 19:31 ` Martin Bosner
@ 2017-02-20 21:30 ` Phil Turmel
0 siblings, 0 replies; 16+ messages in thread
From: Phil Turmel @ 2017-02-20 21:30 UTC (permalink / raw)
To: Martin Bosner; +Cc: linux-raid
On 02/20/2017 02:31 PM, Martin Bosner wrote:
> Is there a way to tell mdadm to use sdk1 as an "active" device? And how
> can I tell the array that it should not try to recover sdm1 but set
> it active? Is there any magic to force the state? It might not be
> healthy for the normal use case but might be helpful for me.
No. There's no way to do that. Short of reading the kernel code to do
hex editing on the superblock. And since the spare status is there due
to an --add action, you can't trust that anything else there would be
safe for --create --assume-clean. You would scramble these eggs even
further.
Phil
* Re: assistance recovering failed raid6 array
2017-02-20 21:21 ` Phil Turmel
@ 2017-02-21 2:03 ` Brad Campbell
0 siblings, 0 replies; 16+ messages in thread
From: Brad Campbell @ 2017-02-21 2:03 UTC (permalink / raw)
To: Phil Turmel, Wols Lists, Martin Bosner; +Cc: linux-raid
On 21/02/17 05:21, Phil Turmel wrote:
> On 02/20/2017 03:45 PM, Wols Lists wrote:
>
>> But there's a request on the linux wiki for someone to write a
>> utility program that takes a ddrescue log and flags the duff sectors as
>> "soft unreadable". That would mean that if you can recover 35 drives,
>> provided no stripe has lost two sectors across two drives, you wouldn't
>> lose any data.
>>
>> If you want to try and write that utility? Or if you want to email me a
>> ddrescue log with a bunch of failed sectors, I'll have a go at writing
>> it myself :-)
>
> Check out hdparm --make-bad-sector. You can get what you are describing
> by scripting that. It's marked very dangerous, but I guess if one has
> nothing to lose....
>
Wol and I have tic-tacced on that a couple of times. He suggested the
idea and I proved the viability of it by testing hdparm in a RAID to do
exactly that. Neither of us has had a chance to stitch it all together,
but the preliminary tests indicate that would do *exactly* what was
required.
Given the hdparm commands are completely reversible and non-permanent,
*and* they are being executed on a destination of a ddrescue, there is
pretty much no risk (as long as the underlying glue code to be written
gets the sector numbers and offsets right).
Regards,
Brad