* help requested for mdadm grow error
@ 2020-05-25 17:25 Thomas Grawert
2020-05-25 18:09 ` Wols Lists
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 17:25 UTC (permalink / raw)
To: linux-raid
Hi there,
I'm pretty new here and have already tried finding a solution with aunt
Google - without luck. So hopefully one of you can help me:
I'm running a NAS with 4x 12TB WD120EFAX drives in an mdadm RAID5 on
Debian 10.
For more capacity and speed I tried adding another WD120EFAX by simply running
"mdadm --grow --raid-devices=5 /dev/md0 /dev/sd[a-e]1 --backup-file=/tmp/bu.bak"
Everything worked... but during the reshape there was a power interruption.
When power came back I tried to restart the NAS, but the md device had
disappeared. Long story short: after asking aunt Google I managed to get
the raid5 up as "active, Not Started":
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Used Dev Size : 18446744073709551615
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Mon May 25 16:05:38 2020
State : active, Not Started
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Delta Devices : 1, (4->5)
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38602
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
4 8 49 3 active sync /dev/sdd1
5 8 65 4 active sync /dev/sde1
I already tried to repair the md0 using the approach described at
https://serverfault.com/questions/776170/mdadm-grow-power-failure-dev-md2-no-longer-detected-raid5
... however, the raid didn't start.
Unfortunately the raid is not mountable (cannot read superblock), even
read-only, so it's impossible to back up the stored data for now.
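For an array stuck mid-reshape after a power cut, the usual recovery path (a sketch, not verified against this exact setup) is to stop the array and re-assemble it while pointing mdadm at the grow backup file; `--invalid-backup` lets assembly proceed when the backup file's contents can no longer be trusted. The device names below are taken from the `mdadm -D` output above; the script only prints the commands so they can be reviewed before being run by hand as root.

```shell
#!/bin/sh
# Sketch: re-assemble a RAID5 whose reshape was interrupted by power loss.
# Printed as a dry run -- inspect each line, then execute manually as root.
ARRAY=/dev/md0
BACKUP=/tmp/bu.bak            # backup file given to the original --grow
MEMBERS="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1"

echo "mdadm --stop $ARRAY"
# --invalid-backup: continue assembly even when the backup-file contents
# cannot be trusted (typical after a crash in the middle of a reshape)
echo "mdadm --assemble $ARRAY --backup-file=$BACKUP --invalid-backup $MEMBERS"
```

If assembly succeeds, the reshape should resume on its own and its progress shows up in /proc/mdstat.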
Any help is highly appreciated.
Greetings
Thomas
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 17:25 help requested for mdadm grow error Thomas Grawert
@ 2020-05-25 18:09 ` Wols Lists
2020-05-25 18:18 ` Thomas Grawert
2020-05-25 18:24 ` Thomas Grawert
0 siblings, 2 replies; 23+ messages in thread
From: Wols Lists @ 2020-05-25 18:09 UTC (permalink / raw)
To: Thomas Grawert, linux-raid
On 25/05/20 18:25, Thomas Grawert wrote:
> Unfortunately the raid is not mountable (cannot read superblock), even
> readonly. So it´s impossible to run a backup of stored data, for now.
>
> Any help is highly appreciated.
https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
Especially
https://raid.wiki.kernel.org/index.php/Asking_for_help
More will follow but we need this info.
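The wiki page asks reporters to attach drive and array state. A minimal collection script along those lines (a sketch; the device letters are assumed from this thread) could look like:

```shell
#!/bin/sh
# Gather the diagnostics the raid wiki's "Asking for help" page requests.
OUT=/tmp/raid-report.txt
: > "$OUT"

echo "== /proc/mdstat ==" >> "$OUT"
cat /proc/mdstat >> "$OUT" 2>&1           # kernel's view of the array

for d in /dev/sd[abcde]; do
    echo "== smartctl $d ==" >> "$OUT"
    smartctl --xall "$d" >> "$OUT" 2>&1   # drive health
done

for p in /dev/sd[abcde]1; do
    echo "== mdadm --examine $p ==" >> "$OUT"
    mdadm --examine "$p" >> "$OUT" 2>&1   # per-member superblock state
done

echo "report written to $OUT"
```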
Cheers,
Wol
* Re: help requested for mdadm grow error
2020-05-25 18:09 ` Wols Lists
@ 2020-05-25 18:18 ` Thomas Grawert
2020-05-25 18:55 ` Wols Lists
2020-05-25 18:24 ` Thomas Grawert
1 sibling, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 18:18 UTC (permalink / raw)
To: linux-raid
> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>
> Especially
>
> https://raid.wiki.kernel.org/index.php/Asking_for_help
>
> More will follow but we need this info.
You're totally right. Sorry for not quoting.
All drives are new.
root@nas:~# smartctl --xall /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD120EFAX-68UNTN0
Serial Number: 8CHM61BE
LU WWN Device Id: 5 000cca 26fd6d13e
Firmware Version: 81.00A81
User Capacity: 12.000.138.625.024 bytes [12,0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 25 20:12:40 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
Self-test execution status: ( 0) The previous self-test routine
completed
without error or no self-test
has ever
been run.
Total time to complete Offline
data collection: ( 87) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1197) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control
supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate PO-R-- 100 100 016 - 0
2 Throughput_Performance --S--- 129 129 054 - 104
3 Spin_Up_Time POS--- 100 100 024 - 0
4 Start_Stop_Count -O--C- 100 100 000 - 2
5 Reallocated_Sector_Ct PO--CK 100 100 005 - 0
7 Seek_Error_Rate -O-R-- 100 100 067 - 0
8 Seek_Time_Performance --S--- 140 140 020 - 15
9 Power_On_Hours -O--C- 100 100 000 - 212
10 Spin_Retry_Count -O--C- 100 100 060 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 2
22 Unknown_Attribute PO---K 100 100 025 - 100
192 Power-Off_Retract_Count -O--CK 100 100 000 - 8
193 Load_Cycle_Count -O--C- 100 100 000 - 8
194 Temperature_Celsius -O---- 196 196 000 - 33 (Min/Max
22/37)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_Sector -O---K 100 100 000 - 0
198 Offline_Uncorrectable ---R-- 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-R-- 200 200 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 5501 Pending Defects log
0x10 GPL R/O 1 SATA NCQ Queued Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x12 GPL R/O 1 SATA NCQ NON-DATA log
0x13 GPL R/O 1 SATA NCQ Send and Receive log
0x15 GPL R/W 1 SATA Rebuild Assist log
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x24 GPL R/O 256 Current Device Internal Status Data log
0x25 GPL R/O 256 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 256 (0x0100)
SCT Support Level: 0
Device State: Active (0)
Current Temperature: 33 Celsius
Power Cycle Min/Max Temperature: 26/37 Celsius
Lifetime Min/Max Temperature: 22/37 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 128 (70)
Index Estimated Time Temperature Celsius
71 2020-05-25 18:05 35 ****************
... ..( 9 skipped). .. ****************
81 2020-05-25 18:15 35 ****************
82 2020-05-25 18:16 34 ***************
83 2020-05-25 18:17 34 ***************
84 2020-05-25 18:18 34 ***************
85 2020-05-25 18:19 35 ****************
... ..( 9 skipped). .. ****************
95 2020-05-25 18:29 35 ****************
96 2020-05-25 18:30 34 ***************
... ..( 80 skipped). .. ***************
49 2020-05-25 19:51 34 ***************
50 2020-05-25 19:52 33 **************
... ..( 18 skipped). .. **************
69 2020-05-25 20:11 33 **************
70 2020-05-25 20:12 35 ****************
SCT Error Recovery Control:
Read: 70 (7,0 seconds)
Write: 70 (7,0 seconds)
Device Statistics (GP/SMART Log 0x04) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 53 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 48 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
========================================================
root@nas:~# smartctl --xall /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD120EFAX-68UNTN0
Serial Number: 8CHUDDLE
LU WWN Device Id: 5 000cca 26fd9a35a
Firmware Version: 81.00A81
User Capacity: 12.000.138.625.024 bytes [12,0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 25 20:13:47 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
Self-test execution status: ( 0) The previous self-test routine
completed
without error or no self-test
has ever
been run.
Total time to complete Offline
data collection: ( 87) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1322) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control
supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate PO-R-- 100 100 016 - 0
2 Throughput_Performance --S--- 129 129 054 - 104
3 Spin_Up_Time POS--- 100 100 024 - 0
4 Start_Stop_Count -O--C- 100 100 000 - 2
5 Reallocated_Sector_Ct PO--CK 100 100 005 - 0
7 Seek_Error_Rate -O-R-- 100 100 067 - 0
8 Seek_Time_Performance --S--- 128 128 020 - 18
9 Power_On_Hours -O--C- 100 100 000 - 212
10 Spin_Retry_Count -O--C- 100 100 060 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 2
22 Unknown_Attribute PO---K 100 100 025 - 100
192 Power-Off_Retract_Count -O--CK 100 100 000 - 4
193 Load_Cycle_Count -O--C- 100 100 000 - 4
194 Temperature_Celsius -O---- 196 196 000 - 33 (Min/Max
22/37)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_Sector -O---K 100 100 000 - 0
198 Offline_Uncorrectable ---R-- 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-R-- 200 200 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 5501 Pending Defects log
0x10 GPL R/O 1 SATA NCQ Queued Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x12 GPL R/O 1 SATA NCQ NON-DATA log
0x13 GPL R/O 1 SATA NCQ Send and Receive log
0x15 GPL R/W 1 SATA Rebuild Assist log
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x24 GPL R/O 256 Current Device Internal Status Data log
0x25 GPL R/O 256 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 256 (0x0100)
SCT Support Level: 0
Device State: Active (0)
Current Temperature: 33 Celsius
Power Cycle Min/Max Temperature: 26/37 Celsius
Lifetime Min/Max Temperature: 22/37 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 128 (62)
Index Estimated Time Temperature Celsius
63 2020-05-25 18:06 35 ****************
... ..( 26 skipped). .. ****************
90 2020-05-25 18:33 35 ****************
91 2020-05-25 18:34 34 ***************
92 2020-05-25 18:35 34 ***************
93 2020-05-25 18:36 34 ***************
94 2020-05-25 18:37 35 ****************
... ..( 9 skipped). .. ****************
104 2020-05-25 18:47 35 ****************
105 2020-05-25 18:48 34 ***************
... ..( 80 skipped). .. ***************
58 2020-05-25 20:09 34 ***************
59 2020-05-25 20:10 33 **************
60 2020-05-25 20:11 33 **************
61 2020-05-25 20:12 33 **************
62 2020-05-25 20:13 35 ****************
SCT Error Recovery Control:
Read: 70 (7,0 seconds)
Write: 70 (7,0 seconds)
Device Statistics (GP/SMART Log 0x04) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 12004 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 12005 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
================================================
root@nas:~# smartctl --xall /dev/sdc
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD120EFAX-68UNTN0
Serial Number: 8CJMXNDE
LU WWN Device Id: 5 000cca 26fe53da3
Firmware Version: 81.00A81
User Capacity: 12.000.138.625.024 bytes [12,0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 25 20:15:17 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
Self-test execution status: ( 0) The previous self-test routine
completed
without error or no self-test
has ever
been run.
Total time to complete Offline
data collection: ( 87) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1260) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control
supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate PO-R-- 100 100 016 - 0
2 Throughput_Performance --S--- 127 127 054 - 112
3 Spin_Up_Time POS--- 100 100 024 - 0
4 Start_Stop_Count -O--C- 100 100 000 - 2
5 Reallocated_Sector_Ct PO--CK 100 100 005 - 0
7 Seek_Error_Rate -O-R-- 100 100 067 - 0
8 Seek_Time_Performance --S--- 140 140 020 - 15
9 Power_On_Hours -O--C- 100 100 000 - 212
10 Spin_Retry_Count -O--C- 100 100 060 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 2
22 Unknown_Attribute PO---K 100 100 025 - 100
192 Power-Off_Retract_Count -O--CK 100 100 000 - 7
193 Load_Cycle_Count -O--C- 100 100 000 - 7
194 Temperature_Celsius -O---- 196 196 000 - 33 (Min/Max
23/37)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_Sector -O---K 100 100 000 - 0
198 Offline_Uncorrectable ---R-- 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-R-- 200 200 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 5501 Pending Defects log
0x10 GPL R/O 1 SATA NCQ Queued Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x12 GPL R/O 1 SATA NCQ NON-DATA log
0x13 GPL R/O 1 SATA NCQ Send and Receive log
0x15 GPL R/W 1 SATA Rebuild Assist log
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x24 GPL R/O 256 Current Device Internal Status Data log
0x25 GPL R/O 256 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 256 (0x0100)
SCT Support Level: 0
Device State: Active (0)
Current Temperature: 33 Celsius
Power Cycle Min/Max Temperature: 26/37 Celsius
Lifetime Min/Max Temperature: 23/37 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 128 (54)
Index Estimated Time Temperature Celsius
55 2020-05-25 18:08 35 ****************
... ..( 22 skipped). .. ****************
78 2020-05-25 18:31 35 ****************
79 2020-05-25 18:32 34 ***************
... ..( 5 skipped). .. ***************
85 2020-05-25 18:38 34 ***************
86 2020-05-25 18:39 35 ****************
... ..( 5 skipped). .. ****************
92 2020-05-25 18:45 35 ****************
93 2020-05-25 18:46 34 ***************
... ..( 81 skipped). .. ***************
47 2020-05-25 20:08 34 ***************
48 2020-05-25 20:09 33 **************
... ..( 4 skipped). .. **************
53 2020-05-25 20:14 33 **************
54 2020-05-25 20:15 35 ****************
SCT Error Recovery Control:
Read: 70 (7,0 seconds)
Write: 70 (7,0 seconds)
Device Statistics (GP/SMART Log 0x04) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 25 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 26 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
===============================================
root@nas:~# smartctl --xall /dev/sdd
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD120EFAX-68UNTN0
Serial Number: 8CJNZTLE
LU WWN Device Id: 5 000cca 26fe5ba06
Firmware Version: 81.00A81
User Capacity: 12.000.138.625.024 bytes [12,0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 25 20:16:28 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
Self-test execution status: ( 0) The previous self-test routine
completed
without error or no self-test
has ever
been run.
Total time to complete Offline
data collection: ( 87) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1333) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control
supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate PO-R-- 100 100 016 - 0
2 Throughput_Performance --S--- 127 127 054 - 112
3 Spin_Up_Time POS--- 100 100 024 - 0
4 Start_Stop_Count -O--C- 100 100 000 - 2
5 Reallocated_Sector_Ct PO--CK 100 100 005 - 0
7 Seek_Error_Rate -O-R-- 100 100 067 - 0
8 Seek_Time_Performance --S--- 140 140 020 - 15
9 Power_On_Hours -O--C- 100 100 000 - 212
10 Spin_Retry_Count -O--C- 100 100 060 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 2
22 Unknown_Attribute PO---K 100 100 025 - 100
192 Power-Off_Retract_Count -O--CK 100 100 000 - 10
193 Load_Cycle_Count -O--C- 100 100 000 - 10
194 Temperature_Celsius -O---- 196 196 000 - 33 (Min/Max
22/37)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_Sector -O---K 100 100 000 - 0
198 Offline_Uncorrectable ---R-- 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-R-- 200 200 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 5501 Pending Defects log
0x10 GPL R/O 1 SATA NCQ Queued Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x12 GPL R/O 1 SATA NCQ NON-DATA log
0x13 GPL R/O 1 SATA NCQ Send and Receive log
0x15 GPL R/W 1 SATA Rebuild Assist log
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x24 GPL R/O 256 Current Device Internal Status Data log
0x25 GPL R/O 256 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 256 (0x0100)
SCT Support Level: 0
Device State: Active (0)
Current Temperature: 33 Celsius
Power Cycle Min/Max Temperature: 25/37 Celsius
Lifetime Min/Max Temperature: 22/37 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 128 (1)
Index Estimated Time Temperature Celsius
2 2020-05-25 18:09 34 ***************
3 2020-05-25 18:10 34 ***************
4 2020-05-25 18:11 34 ***************
5 2020-05-25 18:12 35 ****************
... ..( 2 skipped). .. ****************
8 2020-05-25 18:15 35 ****************
9 2020-05-25 18:16 34 ***************
... ..( 48 skipped). .. ***************
58 2020-05-25 19:05 34 ***************
59 2020-05-25 19:06 33 **************
... ..( 4 skipped). .. **************
64 2020-05-25 19:11 33 **************
65 2020-05-25 19:12 34 ***************
... ..( 7 skipped). .. ***************
73 2020-05-25 19:20 34 ***************
74 2020-05-25 19:21 33 **************
... ..( 6 skipped). .. **************
81 2020-05-25 19:28 33 **************
82 2020-05-25 19:29 34 ***************
... ..( 6 skipped). .. ***************
89 2020-05-25 19:36 34 ***************
90 2020-05-25 19:37 33 **************
... ..( 8 skipped). .. **************
99 2020-05-25 19:46 33 **************
100 2020-05-25 19:47 34 ***************
101 2020-05-25 19:48 34 ***************
102 2020-05-25 19:49 33 **************
103 2020-05-25 19:50 34 ***************
104 2020-05-25 19:51 33 **************
... ..( 23 skipped). .. **************
0 2020-05-25 20:15 33 **************
1 2020-05-25 20:16 34 ***************
SCT Error Recovery Control:
Read: 70 (7,0 seconds)
Write: 70 (7,0 seconds)
Device Statistics (GP/SMART Log 0x04) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 13 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 14 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
=============================================
root@nas:~# smartctl --xall /dev/sde
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD120EFAX-68UNTN0
Serial Number: 8CHU2XRE
LU WWN Device Id: 5 000cca 26fd97fc4
Firmware Version: 81.00A81
User Capacity: 12.000.138.625.024 bytes [12,0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 25 20:17:07 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
Self-test execution status: ( 0) The previous self-test routine
completed
without error or no self-test
has ever
been run.
Total time to complete Offline
data collection: ( 87) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1265) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control
supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate PO-R-- 100 100 016 - 0
2 Throughput_Performance --S--- 127 127 054 - 111
3 Spin_Up_Time POS--- 100 100 024 - 0
4 Start_Stop_Count -O--C- 100 100 000 - 2
5 Reallocated_Sector_Ct PO--CK 100 100 005 - 0
7 Seek_Error_Rate -O-R-- 100 100 067 - 0
8 Seek_Time_Performance --S--- 140 140 020 - 15
9 Power_On_Hours -O--C- 100 100 000 - 265
10 Spin_Retry_Count -O--C- 100 100 060 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 2
22 Unknown_Attribute PO---K 100 100 025 - 100
192 Power-Off_Retract_Count -O--CK 100 100 000 - 12
193 Load_Cycle_Count -O--C- 100 100 000 - 12
194 Temperature_Celsius -O---- 196 196 000 - 33 (Min/Max
24/37)
196 Reallocated_Event_Count -O--CK 100 100 000 - 0
197 Current_Pending_Sector -O---K 100 100 000 - 0
198 Offline_Uncorrectable ---R-- 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-R-- 200 200 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 1 Comprehensive SMART error log
0x03 GPL R/O 1 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x08 GPL R/O 2 Power Conditions log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 5501 Pending Defects log
0x10 GPL R/O 1 SATA NCQ Queued Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x12 GPL R/O 1 SATA NCQ NON-DATA log
0x13 GPL R/O 1 SATA NCQ Send and Receive log
0x15 GPL R/W 1 SATA Rebuild Assist log
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x24 GPL R/O 256 Current Device Internal Status Data log
0x25 GPL R/O 256 Saved Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 256 (0x0100)
SCT Support Level: 0
Device State: Active (0)
Current Temperature: 33 Celsius
Power Cycle Min/Max Temperature: 25/37 Celsius
Lifetime Min/Max Temperature: 24/37 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 128 (76)
Index Estimated Time Temperature Celsius
77 2020-05-25 18:10 35 ****************
78 2020-05-25 18:11 34 ***************
... ..( 2 skipped). .. ***************
81 2020-05-25 18:14 34 ***************
82 2020-05-25 18:15 35 ****************
... ..( 8 skipped). .. ****************
91 2020-05-25 18:24 35 ****************
92 2020-05-25 18:25 34 ***************
... ..( 63 skipped). .. ***************
28 2020-05-25 19:29 34 ***************
29 2020-05-25 19:30 33 **************
... ..( 2 skipped). .. **************
32 2020-05-25 19:33 33 **************
33 2020-05-25 19:34 34 ***************
... ..( 9 skipped). .. ***************
43 2020-05-25 19:44 34 ***************
44 2020-05-25 19:45 33 **************
... ..( 7 skipped). .. **************
52 2020-05-25 19:53 33 **************
53 2020-05-25 19:54 34 ***************
... ..( 4 skipped). .. ***************
58 2020-05-25 19:59 34 ***************
59 2020-05-25 20:00 33 **************
... ..( 15 skipped). .. **************
75 2020-05-25 20:16 33 **************
76 2020-05-25 20:17 35 ****************
SCT Error Recovery Control:
Read: 70 (7,0 seconds)
Write: 70 (7,0 seconds)
Device Statistics (GP/SMART Log 0x04) not supported
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 13 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 14 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
==========================================
The last thing I tried is:
root@nas:~# mdadm -Av --invalid-backup --backup-file=/tmp/bu.bak
--update=resync /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 4.
mdadm: /dev/md0 has an active reshape - checking if critical section
needs to be restored
mdadm: Cannot read from /tmp/bu.bak
mdadm: No backup metadata on device-4
mdadm: Failed to find backup of critical section
mdadm: continuing without restoring backup
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdc1 to /dev/md0 as 2
mdadm: added /dev/sdd1 to /dev/md0 as 3
mdadm: added /dev/sde1 to /dev/md0 as 4
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 5
Persistence : Superblock is persistent
State : inactive
Delta Devices : 1, (-1->0)
New Level : raid5
New Layout : left-symmetric
New Chunksize : 512K
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38602
Number Major Minor RaidDevice
- 8 65 - /dev/sde1
- 8 49 - /dev/sdd1
- 8 33 - /dev/sdc1
- 8 17 - /dev/sdb1
- 8 1 - /dev/sda1
Thanks for your help
Greetings
Thomas
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 18:09 ` Wols Lists
2020-05-25 18:18 ` Thomas Grawert
@ 2020-05-25 18:24 ` Thomas Grawert
1 sibling, 0 replies; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 18:24 UTC (permalink / raw)
To: linux-raid
> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>
> Especially
>
> https://raid.wiki.kernel.org/index.php/Asking_for_help
>
> More will follow but we need this info.
>
> Cheers,
> Wol
And the output of examine:
root@nas:~# mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Name : nas:0 (local to host nas)
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 23437504512 (11175.87 GiB 12000.00 GB)
Array Size : 46875009024 (44703.49 GiB 48000.01 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 64b8c8bd:400cf9d1:d5161aac:4b8ac29a
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 0
Delta Devices : 1 (4->5)
Update Time : Mon May 25 16:05:38 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d9c3b859 - correct
Events : 38602
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
======================================
root@nas:~# mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Name : nas:0 (local to host nas)
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 23437504512 (11175.87 GiB 12000.00 GB)
Array Size : 46875009024 (44703.49 GiB 48000.01 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : d1f20899:f22a1267:6525deb0:e109960c
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 0
Delta Devices : 1 (4->5)
Update Time : Mon May 25 16:05:38 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c0b49f9e - correct
Events : 38602
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
=========================================
root@nas:~# mdadm -E /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Name : nas:0 (local to host nas)
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 23437504512 (11175.87 GiB 12000.00 GB)
Array Size : 46875009024 (44703.49 GiB 48000.01 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 6e6c5cd4:0f3d0695:fac0399d:060554d8
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 0
Delta Devices : 1 (4->5)
Update Time : Mon May 25 16:05:38 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e215c214 - correct
Events : 38602
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
============================================
root@nas:~# mdadm -E /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Name : nas:0 (local to host nas)
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 23437504512 (11175.87 GiB 12000.00 GB)
Array Size : 46875009024 (44703.49 GiB 48000.01 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 999c5ee2:180dc99f:7c65cac5:d46fac89
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 0
Delta Devices : 1 (4->5)
Update Time : Mon May 25 16:05:38 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d4c3d19a - correct
Events : 38602
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
===============================================
root@nas:~# mdadm -E /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Name : nas:0 (local to host nas)
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 23437504512 (11175.87 GiB 12000.00 GB)
Array Size : 46875009024 (44703.49 GiB 48000.01 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 29cfb720:59627867:f11153ce:443a6395
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 0
Delta Devices : 1 (4->5)
Update Time : Mon May 25 16:05:38 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : ef0bd050 - correct
Events : 38602
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
========================
root@nas:~# mdadm -E /dev/sda
/dev/sda:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@nas:~# mdadm -E /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@nas:~# mdadm -E /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@nas:~# mdadm -E /dev/sdd
/dev/sdd:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@nas:~# mdadm -E /dev/sde
/dev/sde:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 18:18 ` Thomas Grawert
@ 2020-05-25 18:55 ` Wols Lists
2020-05-25 19:05 ` Thomas Grawert
0 siblings, 1 reply; 23+ messages in thread
From: Wols Lists @ 2020-05-25 18:55 UTC (permalink / raw)
To: Thomas Grawert, linux-raid
On 25/05/20 19:18, Thomas Grawert wrote:
>
>> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>>
>> Especially
>>
>> https://raid.wiki.kernel.org/index.php/Asking_for_help
>>
>> More will follow but we need this info.
>
> You're totally right. Sorry for not quoting.
>
> All drives are new.
>
> root@nas:~# smartctl --xall /dev/sda
> smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
> Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
>
> === START OF INFORMATION SECTION ===
> Device Model: WDC WD120EFAX-68UNTN0
The EFAX had me worried a moment, but these are 12TB Reds? That's fine.
A lot of the smaller drives are now shingled, ie not fit for purpose!
Debian 10 - I don't know my Debians - how up to date is that? Is it a
new kernel with not much backports, or an old kernel full of backports?
What version of mdadm?
That said, everything looks good. There are known problems - WITH FIXES
- growing a raid 5 so I suspect you've fallen foul of one. I'd sort out
a rescue disk that you can boot off as you might need it. Once we know a
bit more the fix is almost certainly a rescue disk and resume the
reshape, or a revert-reshape and then reshaping from a rescue disk. At
which point, you'll get your array back with everything intact.
Cheers,
Wol
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 18:55 ` Wols Lists
@ 2020-05-25 19:05 ` Thomas Grawert
2020-05-25 19:20 ` Wols Lists
2020-05-25 19:33 ` Mikael Abrahamsson
0 siblings, 2 replies; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 19:05 UTC (permalink / raw)
To: linux-raid
> The EFAX had me worried a moment, but these are 12TB Reds? That's fine.
> A lot of the smaller drives are now shingled, ie not fit for purpose!
>
> Debian 10 - I don't know my Debians - how up to date is that? Is it a
> new kernel with not much backports, or an old kernel full of backports?
>
> What version of mdadm?
>
>
> That said, everything looks good. There are known problems - WITH FIXES
> - growing a raid 5 so I suspect you've fallen foul of one. I'd sort out
> a rescue disk that you can boot off as you might need it. Once we know a
> bit more the fix is almost certainly a rescue disk and resume the
> reshape, or a revert-reshape and then reshaping from a rescue disk. At
> which point, you'll get your array back with everything intact.
Yes, that's the 12TB WD Red - I'm using five of them.
Debian 10 is the most recent one. The kernel version is 4.9.0-12-amd64.
The mdadm version is v3.4 from 28 Jan 2016 - that seems to be the latest,
because I can't upgrade to any newer one using apt upgrade.
I don't think I need a rescue disk, because the raid isn't bootable.
It's simply a big storage volume.
Thanks a lot for your support.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:05 ` Thomas Grawert
@ 2020-05-25 19:20 ` Wols Lists
2020-05-25 21:45 ` John Stoffel
2020-05-25 19:33 ` Mikael Abrahamsson
1 sibling, 1 reply; 23+ messages in thread
From: Wols Lists @ 2020-05-25 19:20 UTC (permalink / raw)
To: Thomas Grawert, linux-raid; +Cc: Phil Turmel
On 25/05/20 20:05, Thomas Grawert wrote:
> The EFAX had me worried a moment, but these are 12TB Reds? That's fine.
>> A lot of the smaller drives are now shingled, ie not fit for purpose!
>>
>> Debian 10 - I don't know my Debians - how up to date is that? Is it a
>> new kernel with not much backports, or an old kernel full of backports?
>>
>> What version of mdadm?
>>
>>
>> That said, everything looks good. There are known problems - WITH FIXES
>> - growing a raid 5 so I suspect you've fallen foul of one. I'd sort out
>> a rescue disk that you can boot off as you might need it. Once we know a
>> bit more the fix is almost certainly a rescue disk and resume the
>> reshape, or a revert-reshape and then reshaping from a rescue disk. At
>> which point, you'll get your array back with everything intact.
>
> yes, that´s the 12TB WD-Red - I´m using five pieces of it.
>
> The Debian 10 is the most recent one. Kernel version is 4.9.0-12-amd64.
> mdadm-version is v3.4 from 28th Jan 2016 - seems to be the latest,
>> because I can't upgrade to any newer one using apt upgrade.
OW! OW! OW!
The newest mdadm is 4.1 or 4.2. UPGRADE NOW. Just download and build it
from the master repository - the instructions are in the wiki.
And if Debian 10 is the latest, kernel 4.9 will be a
franken-patched-to-hell-kernel ... I believe the latest kernel is 5.6?
>
> I don't think I need a rescue disk, because the raid isn't bootable.
> It's simply a big storage.
>
Okay, the latest mdadm *might* fix your problem. However, you probably
need a proper up-to-date kernel as well, so you DO need a rescue disk.
Unless Debian has the option of upgrading the kernel to a 5.x series
kernel which hopefully isn't patched to hell and back?
This looks like the classic "I'm running ubuntu with a franken-kernel
and raid administration no longer works" problem.
I'm guessing (an educated "probably right" guess) that your reshape has
hung at 0% complete. So the fix is to get your rescue disk, use the
latest mdadm to do a revert-reshape, then use the latest kernel and
mdadm to do the reshape, before booting back into your old Debian and
carrying on as if nothing had happened.
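Spelled out as commands, that sequence would look roughly like this. This is a hedged sketch only: the device names are assumed from the -D output earlier in the thread, and the exact flags should be checked against the mdadm version actually in use.

```shell
# Hedged outline of the revert-reshape route described above.
# DRY_RUN=1 only prints each command so the plan can be reviewed;
# set DRY_RUN=0 in the rescue environment to actually run it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. Stop the half-assembled array.
run mdadm --stop /dev/md0

# 2. Revert the interrupted 4->5 grow (requires a recent mdadm).
run mdadm --assemble /dev/md0 --force --update=revert-reshape \
  /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# 3. After the array is clean again, redo the grow with a backup
#    file on storage that survives a reboot (not /tmp).
run mdadm --grow --raid-devices=5 /dev/md0 \
  --backup-file=/root/md0-grow.bak
```

Note that the original grow used --backup-file=/tmp/bu.bak; /tmp is typically cleared on reboot, which is why mdadm could not read the backup after the power loss. Putting the backup file on persistent storage avoids repeating that.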
Cheers,
Wol
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:05 ` Thomas Grawert
2020-05-25 19:20 ` Wols Lists
@ 2020-05-25 19:33 ` Mikael Abrahamsson
2020-05-25 19:35 ` Thomas Grawert
2020-05-25 20:30 ` Thomas Grawert
1 sibling, 2 replies; 23+ messages in thread
From: Mikael Abrahamsson @ 2020-05-25 19:33 UTC (permalink / raw)
To: Thomas Grawert; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 409 bytes --]
On Mon, 25 May 2020, Thomas Grawert wrote:
> The Debian 10 is the most recent one. Kernel version is 4.9.0-12-amd64.
> mdadm-version is v3.4 from 28th Jan 2016 - seems to be the latest,
> because I can't upgrade to any newer one using apt upgrade.
Are you sure about this? From what I can see debian 10 ships with mdadm
v4.1 and newer kernels than 4.9.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:33 ` Mikael Abrahamsson
@ 2020-05-25 19:35 ` Thomas Grawert
2020-05-25 19:42 ` Andy Smith
2020-05-25 20:30 ` Thomas Grawert
1 sibling, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 19:35 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
> On Mon, 25 May 2020, Thomas Grawert wrote:
>
>> The Debian 10 is the most recent one. Kernel version is
>> 4.9.0-12-amd64. mdadm-version is v3.4 from 28th Jan 2016 - seems to
>> be the latest, because I can't upgrade to any newer one using apt
>> upgrade.
>
> Are you sure about this? From what I can see debian 10 ships with
> mdadm v4.1 and newer kernels than 4.9.
>
I should have checked before blurting out nonsense...
It's currently running Debian 9.
I'm just running an upgrade to Buster. Let's see how things work.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:35 ` Thomas Grawert
@ 2020-05-25 19:42 ` Andy Smith
0 siblings, 0 replies; 23+ messages in thread
From: Andy Smith @ 2020-05-25 19:42 UTC (permalink / raw)
To: linux-raid
On Mon, May 25, 2020 at 09:35:20PM +0200, Thomas Grawert wrote:
> >Someone unattributed wrote:
> >Are you sure about this? From what I can see debian 10 ships with mdadm
> >v4.1 and newer kernels than 4.9.
> >
> I should have checked before blurting out nonsense...
> It's currently running Debian 9.
$ lsb_release -d && mdadm --version && uname -r
Description: Debian GNU/Linux 10 (buster)
mdadm - v4.1 - 2018-10-01
4.19.0-9-amd64
Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:33 ` Mikael Abrahamsson
2020-05-25 19:35 ` Thomas Grawert
@ 2020-05-25 20:30 ` Thomas Grawert
2020-05-25 21:19 ` antlists
1 sibling, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 20:30 UTC (permalink / raw)
Cc: linux-raid
On 25.05.2020 at 21:33, Mikael Abrahamsson wrote:
> On Mon, 25 May 2020, Thomas Grawert wrote:
>
>> The Debian 10 is the most recent one. Kernel version is
>> 4.9.0-12-amd64. mdadm-version is v3.4 from 28th Jan 2016 - seems to
>> be the latest, because I can't upgrade to any newer one using apt
>> upgrade.
>
> Are you sure about this? From what I can see debian 10 ships with
> mdadm v4.1 and newer kernels than 4.9.
>
Thanks to Mikael for pointing my nose at it :)
System is now at Debian 10 with kernel 5.5.0-0.bpo.2-amd64. mdadm is at 4.1:
root@nas:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@nas:~# uname -r
5.5.0-0.bpo.2-amd64
root@nas:~# mdadm -V
mdadm - v4.1 - 2018-10-01
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Used Dev Size : 18446744073709551615
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Mon May 25 16:05:38 2020
State : active, FAILED, Not Started
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : unknown
Delta Devices : 1, (4->5)
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38602
Number Major Minor RaidDevice State
- 0 0 0 removed
- 0 0 1 removed
- 0 0 2 removed
- 0 0 3 removed
- 0 0 4 removed
- 8 1 0 sync /dev/sda1
- 8 81 4 sync /dev/sdf1
- 8 65 3 sync /dev/sde1
- 8 49 2 sync /dev/sdd1
- 8 33 1 sync /dev/sdc1
root@nas:~#
It seems that mdadm.conf or something else is broken?
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 20:30 ` Thomas Grawert
@ 2020-05-25 21:19 ` antlists
2020-05-25 21:22 ` Thomas Grawert
0 siblings, 1 reply; 23+ messages in thread
From: antlists @ 2020-05-25 21:19 UTC (permalink / raw)
To: Thomas Grawert; +Cc: linux-raid
On 25/05/2020 21:30, Thomas Grawert wrote:
>
> On 25.05.2020 at 21:33, Mikael Abrahamsson wrote:
>> On Mon, 25 May 2020, Thomas Grawert wrote:
>>
>>> The Debian 10 is the most recent one. Kernel version is
>>> 4.9.0-12-amd64. mdadm-version is v3.4 from 28th Jan 2016 - seems to
>>> be the latest, because I can't upgrade to any newer one using apt
>>> upgrade.
>>
>> Are you sure about this? From what I can see debian 10 ships with
>> mdadm v4.1 and newer kernels than 4.9.
>>
> Thanks to Mikael for pointing my nose at it :)
>
> System is now at Debian 10 with kernel 5.5.0-0.bpo.2-amd64. mdadm is at
> 4.1:
>
> root@nas:~# cat /etc/os-release
> PRETTY_NAME="Debian GNU/Linux 10 (buster)"
> NAME="Debian GNU/Linux"
> VERSION_ID="10"
> VERSION="10 (buster)"
> VERSION_CODENAME=buster
> ID=debian
> HOME_URL="https://www.debian.org/"
> SUPPORT_URL="https://www.debian.org/support"
> BUG_REPORT_URL="https://bugs.debian.org/"
>
> root@nas:~# uname -r
> 5.5.0-0.bpo.2-amd64
>
> root@nas:~# mdadm -V
> mdadm - v4.1 - 2018-10-01
>
> root@nas:~# mdadm -D /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Sun May 17 00:23:42 2020
> Raid Level : raid5
> Used Dev Size : 18446744073709551615
> Raid Devices : 5
> Total Devices : 5
> Persistence : Superblock is persistent
>
> Update Time : Mon May 25 16:05:38 2020
> State : active, FAILED, Not Started
> Active Devices : 5
> Working Devices : 5
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Consistency Policy : unknown
>
> Delta Devices : 1, (4->5)
>
> Name : nas:0 (local to host nas)
> UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
> Events : 38602
>
> Number Major Minor RaidDevice State
> - 0 0 0 removed
> - 0 0 1 removed
> - 0 0 2 removed
> - 0 0 3 removed
> - 0 0 4 removed
>
> - 8 1 0 sync /dev/sda1
> - 8 81 4 sync /dev/sdf1
> - 8 65 3 sync /dev/sde1
> - 8 49 2 sync /dev/sdd1
> - 8 33 1 sync /dev/sdc1
>
> root@nas:~#
>
>
> It seems that mdadm.conf or something else is broken?
>
I don't think I've got an mdadm.conf ... and everything looks okay to me,
just not working.
Next step - how far has the reshape got? I *think* you might get that
from "cat /proc/mdstat". Can we have that please ... I'm *hoping* it
says the reshape is at 0%.
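For reference, this is what a reshape that is actually progressing looks like in /proc/mdstat, and how to pull the percentage out of it. The sample text here is illustrative only, not output from Thomas's machine.

```shell
# Sample of the mdstat entry being asked about, including the
# "reshape = N%" progress field a resumable reshape would show
# (illustrative text, not real output from this array).
sample='md0 : active raid5 sde1[5] sdd1[4] sdc1[2] sdb1[1] sda1[0]
      35156253696 blocks super 1.2 level 5, 512k chunk [5/5] [UUUUU]
      [>....................]  reshape =  0.0% (1024/11718751232)'

# Extract just the progress field; an inactive array has no such
# line at all.
printf '%s\n' "$sample" | grep -o 'reshape = *[0-9.]*%'
# prints: reshape =  0.0%
```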
Cheers,
Wol
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 21:19 ` antlists
@ 2020-05-25 21:22 ` Thomas Grawert
2020-05-25 22:02 ` antlists
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 21:22 UTC (permalink / raw)
Cc: linux-raid
> I don't think I've got an mdadm.conf ... and everything to me looks okay
> but just not working.
>
> Next step - how far has the reshape got? I *think* you might get that
> from "cat /proc/mdstat". Can we have that please ... I'm *hoping* it
> says the reshape is at 0%.
>
> Cheers,
> Wol
root@nas:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : inactive sda1[0] sdf1[5] sde1[4] sdd1[2] sdc1[1]
58593761280 blocks super 1.2
unused devices: <none>
root@nas:~#
nothing... the reshape ran for about 5 minutes before the power loss.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 19:20 ` Wols Lists
@ 2020-05-25 21:45 ` John Stoffel
2020-05-25 21:55 ` antlists
0 siblings, 1 reply; 23+ messages in thread
From: John Stoffel @ 2020-05-25 21:45 UTC (permalink / raw)
To: Wols Lists; +Cc: Thomas Grawert, linux-raid, Phil Turmel
>>>>> "Wols" == Wols Lists <antlists@youngman.org.uk> writes:
Wols> On 25/05/20 20:05, Thomas Grawert wrote:
>> The EFAX had me worried a moment, but these are 12TB Reds? That's fine.
>>> A lot of the smaller drives are now shingled, ie not fit for purpose!
>>>
>>> Debian 10 - I don't know my Debians - how up to date is that? Is it a
>>> new kernel with not much backports, or an old kernel full of backports?
>>>
>>> What version of mdadm?
>>>
>>>
>>> That said, everything looks good. There are known problems - WITH FIXES
>>> - growing a raid 5 so I suspect you've fallen foul of one. I'd sort out
>>> a rescue disk that you can boot off as you might need it. Once we know a
>>> bit more the fix is almost certainly a rescue disk and resume the
>>> reshape, or a revert-reshape and then reshaping from a rescue disk. At
>>> which point, you'll get your array back with everything intact.
>>
>> yes, that´s the 12TB WD-Red - I´m using five pieces of it.
>>
>> The Debian 10 is the most recent one. Kernel version is 4.9.0-12-amd64.
>> mdadm-version is v3.4 from 28th Jan 2016 - seems to be the latest,
>> because I can´t upgrade to any newer one using apt upgrade.
Wols> OW! OW! OW!
Wols> The newest mdadm is 4.1 or 4.2. UPGRADE NOW. Just download and
Wols> build it from the master repository - the instructions are in
Wols> the wiki.
Wols> And if Debian 10 is the latest, kernel 4.9 will be a
Wols> franken-patched-to-hell-kernel ... I believe the latest kernel
Wols> is 5.6?
This is Debian Buster, and if he's up to date with Debian packages,
he's almost certainly in a stable setup. Now maybe we need to talk
with the Debian package maintainer for mdadm and ask them to upgrade,
or ask why they haven't updated recently.
Hmm... it looks like Debian 10 shipped mdadm 4.1-1, which strongly makes
me suspect he's actually running Debian 9.12, which is A) what my
main server runs at home, and B) also quite old. But stable.
In any case, it's easy enough to git clone the latest release and
compile it.
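A hedged sketch of that clone-and-build follows; the kernel.org path is the usual upstream location for mdadm, but treat it as an assumption and check the wiki if it has moved.

```shell
# Hedged sketch of building mdadm from the upstream repository.
# DRY_RUN=1 prints the steps for review; set DRY_RUN=0 on a machine
# with git, gcc and make installed to execute them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run git clone git://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
run cd mdadm          # 'cd' takes effect in this shell on a real run
run make
run ./mdadm --version  # check the freshly built binary before using it
```

Running the newly built binary in place from the build tree is enough for a one-off recovery; installing it over the distribution's copy is optional.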
John
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: help requested for mdadm grow error
2020-05-25 21:45 ` John Stoffel
@ 2020-05-25 21:55 ` antlists
0 siblings, 0 replies; 23+ messages in thread
From: antlists @ 2020-05-25 21:55 UTC (permalink / raw)
To: John Stoffel; +Cc: Thomas Grawert, linux-raid, Phil Turmel
On 25/05/2020 22:45, John Stoffel wrote:
> This is Debian Buster, and it's he's upto date with debian packages,
> he's almost certainly in a stable setup. Now maybe we need to talk
> with the Debian package maintainer for mdadm and ask them to upgrade,
> or maybe why they haven't updated recently.
If you're going to do that, I think you need to get the kernel
maintainer to upgrade that as well...
As I see it, the CAUSE of the problem is that we have an old-but-updated
frankenkernel. It doesn't matter whether mdadm is contemporary with the
original pre-franken-state kernel or contemporary with the new
up-to-date frankenkernel, the fact of the matter is that the
relationship between mdadm and the kernel isn't regression tested beyond
making sure that arrays assemble and run correctly.
Hence the inevitable screw-ups when users try to administer their arrays.
Cheers,
Wol
* Re: help requested for mdadm grow error
2020-05-25 21:22 ` Thomas Grawert
@ 2020-05-25 22:02 ` antlists
2020-05-25 22:18 ` Thomas Grawert
0 siblings, 1 reply; 23+ messages in thread
From: antlists @ 2020-05-25 22:02 UTC (permalink / raw)
To: Thomas Grawert; +Cc: linux-raid
On 25/05/2020 22:22, Thomas Grawert wrote:
> I don't think I've got an mdadm.conf ... and everything to me looks okay
> but just not working.
>>
>> Next step - how far has the reshape got? I *think* you might get that
>> from "cat /proc/mdstat". Can we have that please ... I'm *hoping* it
>> says the reshape is at 0%.
>>
>> Cheers,
>> Wol
>
>
> root@nas:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : inactive sda1[0] sdf1[5] sde1[4] sdd1[2] sdc1[1]
> 58593761280 blocks super 1.2
>
> unused devices: <none>
> root@nas:~#
>
> nothing... the reshape ran for about 5 minutes before the power loss.
>
Just done a search, and I've found this in a previous thread ...
! # mdadm --assemble /dev/md0 --force --verbose --invalid-backup
! /dev/sda1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdc1
! This command resulted in the following message:
! mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument
! The syslog contained the following line:
! md/raid:md0: reshape_position too early for auto-recovery - aborting.
! That led me to the solution to revert the grow command:
! # mdadm --assemble /dev/md0 --force --verbose --update=revert-reshape
! --invalid-backup /dev/sda1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdc1
Okay, so we need to grep dmesg looking for a message like that above
about reshaping.
So let's grep for "md/raid" and see what we get ...
Cheers,
Wol
* Re: help requested for mdadm grow error
2020-05-25 22:02 ` antlists
@ 2020-05-25 22:18 ` Thomas Grawert
2020-05-25 22:32 ` antlists
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 22:18 UTC (permalink / raw)
Cc: linux-raid
> Okay, so we need to grep dmesg looking for a message like that above
> about reshaping.
>
> So let's grep for "md/raid" and see what we get ...
root@nas:~# dmesg | grep md/raid
[ 321.819562] md/raid:md0: not clean -- starting background reconstruction
[ 321.819564] md/raid:md0: reshape_position too early for auto-recovery
- aborting.
(again: thanks a lot for your help. I never expected that! :-) )
* Re: help requested for mdadm grow error
2020-05-25 22:18 ` Thomas Grawert
@ 2020-05-25 22:32 ` antlists
2020-05-25 23:01 ` Thomas Grawert
` (3 more replies)
0 siblings, 4 replies; 23+ messages in thread
From: antlists @ 2020-05-25 22:32 UTC (permalink / raw)
To: Thomas Grawert; +Cc: linux-raid
On 25/05/2020 23:18, Thomas Grawert wrote:
>
>> Okay, so we need to grep dmesg looking for a message like that above
>> about reshaping.
>>
>> So let's grep for "md/raid" and see what we get ...
>
> root@nas:~# dmesg | grep md/raid
> [ 321.819562] md/raid:md0: not clean -- starting background reconstruction
> [ 321.819564] md/raid:md0: reshape_position too early for auto-recovery
> - aborting.
>
>
> (again: thanks a lot for your help. I never expected that! :-) )
>
>
Okay, so go back to the previous email for an example of how to revert
the reshape - it's done exactly what I expected and the reshape has
failed to start.
Once you've reverted the reshape, it should start fine. And with an
up-to-date mdadm and kernel, it should start and finish reshaping fine.
So step 1, revert the reshape. Step 2, get the array back running. Step
3, start the reshape again.
Cheers,
Wol
* Re: help requested for mdadm grow error
2020-05-25 22:32 ` antlists
@ 2020-05-25 23:01 ` Thomas Grawert
2020-05-25 23:15 ` Thomas Grawert
` (2 subsequent siblings)
3 siblings, 0 replies; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 23:01 UTC (permalink / raw)
Cc: linux-raid
> So step 1, revert the reshape. Step 2, get the array back running.
> Step 3, start the reshape again.
root@nas:~# mdadm --assemble /dev/md0 --force --verbose
--update=revert-reshape --invalid-backup /dev/sda1 /dev/sdc1 /dev/sdd1
/dev/sde1 /dev/sdf1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 4.
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sdf1
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdc1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdf1 to /dev/md0 as 4
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 4 drives and 1 spare.
=======================
WOW!
ok, let´s check the status again:
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Array Size : 35156256768 (33527.62 GiB 36000.01 GB)
Used Dev Size : 11718752256 (11175.87 GiB 12000.00 GB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon May 25 16:05:38 2020
State : clean, resyncing (PENDING)
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38602
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
5 8 81 - spare /dev/sdf1
===================================================
root@nas:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active (auto-read-only) raid5 sda1[0] sdf1[5](S) sde1[4] sdd1[2]
sdc1[1]
35156256768 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/4] [UUUU]
resync=PENDING
bitmap: 0/88 pages [0KB], 65536KB chunk
unused devices: <none>
==================================================
root@nas:~# mount /dev/md0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error.
root@nas:~#
seems the filesystem got a hit. how to proceed now?
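As a cross-check on the -D output above, the reported Array Size matches the usual RAID5 capacity formula, (devices - 1) times the per-device size, since one device's worth of space holds parity. A small sketch using the numbers reported above:

```python
# RAID5 usable capacity is (n - 1) * per-device size; sizes below are
# in 1K blocks, taken from the "Used Dev Size" in the mdadm -D output.
def raid5_capacity(n_devices, dev_size):
    return (n_devices - 1) * dev_size

used_dev_size = 11718752256   # "Used Dev Size" per member, in 1K blocks

# 4 active members -> the reported "Array Size : 35156256768".
print(raid5_capacity(4, used_dev_size))
# Once the grow to 5 drives completes, one more device's worth is added.
print(raid5_capacity(5, used_dev_size))
```

So reverting the reshape back to 4 devices left the original capacity intact, with the fifth drive parked as a spare.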
* Re: help requested for mdadm grow error
2020-05-25 22:32 ` antlists
2020-05-25 23:01 ` Thomas Grawert
@ 2020-05-25 23:15 ` Thomas Grawert
2020-05-25 23:31 ` Thomas Grawert
2020-05-26 1:16 ` Thomas Grawert
3 siblings, 0 replies; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 23:15 UTC (permalink / raw)
Cc: linux-raid
> So step 1, revert the reshape. Step 2, get the array back running.
> Step 3, start the reshape again.
root@nas:~# mdadm --readwrite /dev/md0
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Array Size : 35156256768 (33527.62 GiB 36000.01 GB)
Used Dev Size : 11718752256 (11175.87 GiB 12000.00 GB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue May 26 01:13:08 2020
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38605
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
5 8 81 - spare /dev/sdf1
root@nas:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sda1[0] sdf1[5](S) sde1[4] sdd1[2] sdc1[1]
35156256768 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/4] [UUUU]
bitmap: 0/88 pages [0KB], 65536KB chunk
unused devices: <none>
=================================
ok, got a bit too scared :)
it's now working again, so I can plug in a UPS and restart the grow.
guys, thank you all very much for your help.
* Re: help requested for mdadm grow error
2020-05-25 22:32 ` antlists
2020-05-25 23:01 ` Thomas Grawert
2020-05-25 23:15 ` Thomas Grawert
@ 2020-05-25 23:31 ` Thomas Grawert
2020-05-26 1:16 ` antlists
2020-05-26 1:16 ` Thomas Grawert
3 siblings, 1 reply; 23+ messages in thread
From: Thomas Grawert @ 2020-05-25 23:31 UTC (permalink / raw)
Cc: linux-raid
ok, maybe it´s getting out of scope now. If so, please let me know...
md0 is clean and running. no active resync. I just tried to mount the
filesystem to check if everything is fine and to proceed with growing...
thank god I did it this way, because:
root@nas:~# mount /dev/md0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error.
root@nas:~# df -h
Filesystem Größe Benutzt Verf. Verw% Eingehängt auf
udev 16G 0 16G 0% /dev
tmpfs 3,1G 11M 3,1G 1% /run
/dev/sdg2 203G 7,2G 186G 4% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sdg1 511M 5,2M 506M 1% /boot/efi
tmpfs 3,1G 0 3,1G 0% /run/user/0
root@nas:~# fsck /dev/md0
fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Ungültige magische Zahl im Superblock
fsck.ext2: Superblock ungültig, Datensicherungs-Blöcke werden versucht ...
fsck.ext2: Ungültige magische Zahl im Superblock beim Versuch, /dev/md0
zu öffnen
Der Superblock ist unlesbar bzw. beschreibt kein gültiges ext2/ext3/ext4-
Dateisystem. Wenn das Gerät gültig ist und ein ext2/ext3/ext4-
Dateisystem (kein swap oder ufs usw.) enthält, dann ist der Superblock
beschädigt, und Sie könnten versuchen, e2fsck mit einem anderen Superblock
zu starten:
e2fsck -b 8193 <Gerät>
oder
e2fsck -b 32768 <Gerät>
In /dev/md0 wurde eine gpt-Partitionstabelle gefunden
=========================================
For those who don't understand German:
ext2fs_open2: Invalid magic number in superblock
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Invalid magic number in superblock while trying to open
/dev/md0
Found a gpt-partition-table at /dev/md0
=========================================
there should be a valid ext4 filesystem...
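The ext family keeps its primary superblock at byte offset 1024 of the device, with a little-endian magic 0xEF53 at offset 56 inside it; reading those few bytes shows directly whether the primary superblock is really gone. An illustrative sketch (offsets are from the ext2/3/4 on-disk format, the function name is made up):

```python
# Probe for the primary ext2/3/4 superblock: it sits at byte offset
# 1024, and the 16-bit little-endian word at offset 56 within it must
# be the ext magic 0xEF53.
import struct

EXT_MAGIC = 0xEF53

def has_ext_superblock(first_bytes):
    if len(first_bytes) < 1024 + 58:
        return False
    return struct.unpack_from("<H", first_bytes, 1024 + 56)[0] == EXT_MAGIC

# A healthy superblock would make this True; zeroed bytes do not.
print(has_ext_superblock(bytes(2048)))   # False
```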
* Re: help requested for mdadm grow error
2020-05-25 23:31 ` Thomas Grawert
@ 2020-05-26 1:16 ` antlists
0 siblings, 0 replies; 23+ messages in thread
From: antlists @ 2020-05-26 1:16 UTC (permalink / raw)
To: Thomas Grawert; +Cc: Phil Turmel, linux-raid
On 26/05/2020 00:31, Thomas Grawert wrote:
> ok, maybe it´s getting out of scope now. If so, please let me know...
>
> md0 is clean and running. no active resync. I just tried to mount the
> filesystem to check if everything is fine and to proceed with growing...
> thanks god, I did it this way because:
>
> root@nas:~# mount /dev/md0 /mnt
> mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0,
> missing codepage or helper program, or other error.
>
> root@nas:~# df -h
> Filesystem Größe Benutzt Verf. Verw% Eingehängt auf
> udev 16G 0 16G 0% /dev
> tmpfs 3,1G 11M 3,1G 1% /run
> /dev/sdg2 203G 7,2G 186G 4% /
> tmpfs 16G 0 16G 0% /dev/shm
> tmpfs 5,0M 4,0K 5,0M 1% /run/lock
> tmpfs 16G 0 16G 0% /sys/fs/cgroup
> /dev/sdg1 511M 5,2M 506M 1% /boot/efi
> tmpfs 3,1G 0 3,1G 0% /run/user/0
> root@nas:~# fsck /dev/md0
> fsck from util-linux 2.33.1
> e2fsck 1.44.5 (15-Dec-2018)
> ext2fs_open2: Ungültige magische Zahl im Superblock
> fsck.ext2: Superblock ungültig, Datensicherungs-Blöcke werden versucht ...
> fsck.ext2: Ungültige magische Zahl im Superblock beim Versuch, /dev/md0
> zu öffnen
>
> Der Superblock ist unlesbar bzw. beschreibt kein gültiges ext2/ext3/ext4-
> Dateisystem. Wenn das Gerät gültig ist und ein ext2/ext3/ext4-
> Dateisystem (kein swap oder ufs usw.) enthält, dann ist der Superblock
> beschädigt, und Sie könnten versuchen, e2fsck mit einem anderen Superblock
> zu starten:
> e2fsck -b 8193 <Gerät>
> oder
> e2fsck -b 32768 <Gerät>
>
> In /dev/md0 wurde eine gpt-Partitionstabelle gefunden
>
> =========================================
>
> For those who not understand German:
> ext2fs_open2: Invalid magical number in superblock
> fsck.ext2: Superblock invalid. Backup-blocks are tried...
> fsck.ext2: Invalid magical number in superblock when trying to open
> /dev/md0
>
> Found a gpt-partition-table at /dev/md0
>
> =========================================
>
> there should be a valid ext4 filesystem...
>
Oh help ...
Hopefully all that's happened is that something has written a GPT and
that's it. Unfortunately, that seems painfully common - rogue tools
format disks when they shouldn't ... I just hope it's not the Debian
upgrade that did it.
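Whether a GPT really is what is sitting at the start of the array can be confirmed from a few bytes: a GPT header lives at LBA 1 and begins with the 8-byte signature "EFI PART". A minimal sketch (the function name is made up; it assumes 512-byte sectors):

```python
# Check whether something wrote a GPT over the device. The GPT header
# is at LBA 1 (byte offset 512 for 512-byte sectors) and starts with
# the signature b"EFI PART".
def has_gpt(first_bytes, sector_size=512):
    off = sector_size          # LBA 1
    return first_bytes[off:off + 8] == b"EFI PART"

# Fake 1 KiB "device" carrying only a GPT signature.
fake = bytearray(1024)
fake[512:520] = b"EFI PART"
print(has_gpt(bytes(fake)))   # True
```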
So we should hopefully just be able to recover the filesystem, except I
don't know how.
Did you actually try the e2fsck with the alternate superblocks? That's
the limit of what I can suggest. I don't think e2fsck will try them for
you - you have to explicitly tell it.
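The backup superblock numbers e2fsck suggests (8193 and 32768) come from the default ext layout: with sparse_super, backups sit at the start of groups 1 and the powers of 3, 5 and 7, a group spans as many blocks as one block of bitmap bits can track, and block numbering starts at 1 for 1K-block filesystems. A sketch of that arithmetic (the function is illustrative, not from any tool):

```python
# Compute candidate "e2fsck -b" backup-superblock locations for an
# ext2/3/4 filesystem with the default layout (sparse_super: backups
# in groups 1, 3^n, 5^n, 7^n).
def backup_superblocks(block_size, max_group=50):
    blocks_per_group = 8 * block_size          # bits in one bitmap block
    first_data_block = 1 if block_size == 1024 else 0
    groups = sorted({1} | {p ** n for p in (3, 5, 7) for n in range(1, 6)})
    return [g * blocks_per_group + first_data_block
            for g in groups if g <= max_group]

print(backup_superblocks(1024)[:3])   # [8193, 24577, 40961]
print(backup_superblocks(4096)[0])    # 32768
```

The first entries for 1K and 4K block sizes are exactly the two candidates the fsck output suggested.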
Just test it by using the "don't write anything" option, whatever that
is? If that then doesn't report many errors, it looks like we may have
our filesystem back - cross fingers ...
Cheers,
Wol
* Re: help requested for mdadm grow error
2020-05-25 22:32 ` antlists
` (2 preceding siblings ...)
2020-05-25 23:31 ` Thomas Grawert
@ 2020-05-26 1:16 ` Thomas Grawert
3 siblings, 0 replies; 23+ messages in thread
From: Thomas Grawert @ 2020-05-26 1:16 UTC (permalink / raw)
To: linux-raid
Finally, the solution:
as mentioned earlier, I reverted the reshape
mdadm --assemble /dev/md0 --force --verbose --update=revert-reshape
--invalid-backup /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
This restarted the md auto-read-only. So I had to manually start the md
mdadm --readwrite /dev/md0
after which mounting still failed with a bad superblock.
mke2fs won't work here, because it's limited to 16 TB of disk space,
while my md is 36 TB. Because I was too lazy to google any more, I
started gparted on the machine, selected md0 and initiated a filesystem
integrity check.
This took 5 minutes, afterwards I was able to mount my md.
So I unmounted the md again in order to grow it again:
root@nas:~# mdadm --grow --raid-devices=5 /dev/md0
--backup-file=/tmp/bu_neu.bak
mdadm: Need to backup 6144K of critical section..
root@nas:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun May 17 00:23:42 2020
Raid Level : raid5
Array Size : 35156256768 (33527.62 GiB 36000.01 GB)
Used Dev Size : 11718752256 (11175.87 GiB 12000.00 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue May 26 02:08:22 2020
State : clean, reshaping
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Reshape Status : 0% complete
Delta Devices : 1, (4->5)
Name : nas:0 (local to host nas)
UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
Events : 38631
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
5 8 81 4 active sync /dev/sdf1
root@nas:~#
As we see in mdadm details, reshaping is running. Have a look at mdstat:
root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0]
[raid1] [raid10]
md0 : active raid5 sda1[0] sdf1[5] sde1[4] sdd1[2] sdc1[1]
35156256768 blocks super 1.2 level 5, 512k chunk, algorithm 2
[5/5] [UUUUU]
[>....................] reshape = 0.8% (93989376/11718752256)
finish=8260.4min speed=23454K/sec
bitmap: 0/88 pages [0KB], 65536KB chunk
unused devices: <none>
Seems to be a bit slow right now. I had expected a speed of around
60 MByte/s given the 6G SATA drives. However, it's running again.
I will dig into the speed later.
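The finish estimate in that mdstat line can be reproduced from the other numbers it prints: remaining 1K blocks divided by the current speed in K/sec. A quick sketch with the figures shown above:

```python
# Reproduce mdstat's finish estimate from the reshape line above:
# positions are in 1K blocks, speed is in K/sec.
done_k, total_k = 93989376, 11718752256
speed_k_per_s = 23454

eta_min = (total_k - done_k) / speed_k_per_s / 60
print(f"{eta_min:.1f} min")   # close to mdstat's finish=8260.4min
```

mdstat uses a recent-average speed, so its figure drifts slightly from this single-sample estimate.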
Again: Thanks to everyone who helped me with ideas and / or advice. You
guys saved my ass :)
end of thread, other threads:[~2020-05-26 1:16 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-25 17:25 help requested for mdadm grow error Thomas Grawert
2020-05-25 18:09 ` Wols Lists
2020-05-25 18:18 ` Thomas Grawert
2020-05-25 18:55 ` Wols Lists
2020-05-25 19:05 ` Thomas Grawert
2020-05-25 19:20 ` Wols Lists
2020-05-25 21:45 ` John Stoffel
2020-05-25 21:55 ` antlists
2020-05-25 19:33 ` Mikael Abrahamsson
2020-05-25 19:35 ` Thomas Grawert
2020-05-25 19:42 ` Andy Smith
2020-05-25 20:30 ` Thomas Grawert
2020-05-25 21:19 ` antlists
2020-05-25 21:22 ` Thomas Grawert
2020-05-25 22:02 ` antlists
2020-05-25 22:18 ` Thomas Grawert
2020-05-25 22:32 ` antlists
2020-05-25 23:01 ` Thomas Grawert
2020-05-25 23:15 ` Thomas Grawert
2020-05-25 23:31 ` Thomas Grawert
2020-05-26 1:16 ` antlists
2020-05-26 1:16 ` Thomas Grawert
2020-05-25 18:24 ` Thomas Grawert