* Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged
From: Martin Thoma @ 2021-11-17 12:22 UTC
  To: linux-raid

Hi all,

I have a RAID 5 with six 3 TB devices. After a power failure the raid
didn't assemble automatically, so I force-assembled it with

mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdej]1

root@nas:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdej]1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdj1 is identified as a member of /dev/md0, slot 1.
mdadm: forcing event count in /dev/sde1(3) from 101607 upto 101616
mdadm: forcing event count in /dev/sdc1(4) from 101607 upto 101616
mdadm: forcing event count in /dev/sdb1(5) from 101607 upto 101616
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde1
mdadm: clearing FAULTY flag for device 2 in /dev/md0 for /dev/sdc1
mdadm: clearing FAULTY flag for device 1 in /dev/md0 for /dev/sdb1
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdd1 to /dev/md0 as 0 (possibly out of date)
mdadm: added /dev/sda1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdc1 to /dev/md0 as 4
mdadm: added /dev/sdb1 to /dev/md0 as 5
mdadm: added /dev/sdj1 to /dev/md0 as 1
mdadm: /dev/md0 assembled from 5 drives - not enough to start the array.

So /dev/sdd1 was considered possibly out of date; when I ran the
command again, the raid assembled without sdd1.

When I tried reading data, it stopped after a while (probably when
the data was on /dev/sdc).

dmesg showed this:
[  368.433658] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  368.433664] sd 8:0:0:1: [sdc] tag#0 Sense Key : Medium Error [current]
[  368.433669] sd 8:0:0:1: [sdc] tag#0 Add. Sense: Unrecovered read error
[  368.433675] sd 8:0:0:1: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 08 81 d8 00 00 00 08 00 00
[  368.433679] blk_update_request: critical medium error, dev sdc, sector 557528
[  368.433689] raid5_end_read_request: 77 callbacks suppressed
[  368.433692] md/raid:md0: read error not correctable (sector 555480 on sdc1).
[  375.944254] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

and the raid stopped again.

How can I force-assemble the raid including /dev/sdd1 but without
/dev/sdc (because that drive is probably damaged now)?
With an mdadm --create --assume-clean .. command?
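
For illustration, the kind of command I mean is something like this (I
have not run it yet; the device list is only my guess at leaving sdc
out):

mdadm --assemble --force --verbose /dev/md0 /dev/sd[abdej]1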

I'm using mdadm/zesty-updates,now 3.4-4ubuntu0.1 amd64 [installed] on
Linux version 4.10.0-21-generic (buildd@lgw01-12) (gcc version 6.3.0 20170406 (Ubuntu 6.3.0-12ubuntu2))

Regards and thanks in advance

A few other outputs:
mdadm --examine /dev/sd[abcdej]1 | egrep 'Event|/dev/sd'
/dev/sda1:
         Events : 101616
/dev/sdb1:
         Events : 101616
/dev/sdc1:
         Events : 101616
/dev/sdd1:
         Events : 101607
/dev/sde1:
         Events : 101616
/dev/sdj1:
         Events : 101616

 mdadm --examine /dev/sd[abcdej]1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : e41b2a30:94a9fa78:9b8e021d:ddb50b84

    Update Time : Wed Nov 17 11:47:36 2021
       Checksum : 7f1551a8 - correct
         Events : 101616

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : .AA... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : d32f3e56:ab5c727d:53de3db9:e0bfadee

    Update Time : Wed Nov 17 11:47:19 2021
       Checksum : b08f725a - correct
         Events : 101616

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 5
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : 23fc5428:02411e8f:ad843649:d8addbd0

    Update Time : Wed Nov 17 11:47:19 2021
       Checksum : 678b755 - correct
         Events : 101616

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 0 sectors
   Unused Space : before=262056 sectors, after=1024 sectors
          State : clean
    Device UUID : 09baa98d:5456baf2:925a9555:5b650e7f

    Update Time : Wed Nov 17 11:47:19 2021
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 8a031835 - correct
         Events : 101607

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=1024 sectors
          State : clean
    Device UUID : bb060f08:741f38fd:6006b07e:f0a9c992

    Update Time : Wed Nov 17 11:47:19 2021
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 4bbc1e74 - correct
         Events : 101616

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdj1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8f33f56c:efd27830:4ac273aa:94b79171
           Name : htpc:0
  Creation Time : Thu Jan 16 20:36:01 2014
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : d5227db1:d9bde1c4:7b0fe2f1:4eccbbf0

    Update Time : Wed Nov 17 11:47:36 2021
       Checksum : 8df1042c - correct
         Events : 101616

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .AAAAA ('A' == active, '.' == missing, 'R' == replacing)


* Re: Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged
From: Wols Lists @ 2021-11-17 17:56 UTC
  To: Martin Thoma, linux-raid

On 17/11/2021 12:22, Martin Thoma wrote:
> Hi all,
> 
> So /dev/sdd1 was considered possibly out of date; when I ran the
> command again, the raid assembled without sdd1.
> 
> When I tried reading data, it stopped after a while (probably when
> the data was on /dev/sdc).
> 
> dmesg showed this:
> [  368.433658] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [  368.433664] sd 8:0:0:1: [sdc] tag#0 Sense Key : Medium Error [current]
> [  368.433669] sd 8:0:0:1: [sdc] tag#0 Add. Sense: Unrecovered read error
> [  368.433675] sd 8:0:0:1: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 08 81 d8 00 00 00 08 00 00
> [  368.433679] blk_update_request: critical medium error, dev sdc, sector 557528
> [  368.433689] raid5_end_read_request: 77 callbacks suppressed
> [  368.433692] md/raid:md0: read error not correctable (sector 555480 on sdc1).
> [  375.944254] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> 
> and the raid stopped again.
> 
> How can I force-assemble the raid including /dev/sdd1 but without
> /dev/sdc (because that drive is probably damaged now)?
> With an mdadm --create --assume-clean .. command?

NO NO NO NO NO !!!
> 
> I'm using mdadm/zesty-updates,now 3.4-4ubuntu0.1 amd64 [installed] on
> Linux version 4.10.0-21-generic (buildd@lgw01-12) (gcc version 6.3.0 20170406 (Ubuntu 6.3.0-12ubuntu2))
> 
That's an old Ubuntu, and an ancient mdadm 3.4?

As a very first action, you need to source a much newer rescue disk!

As a second action, if you think sdc and sdd are dodgy, then you need to 
replace them - use dd or ddrescue to do a brute-force copy.
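
Something along these lines, for example (a sketch only - check the
ddrescue man page first; /dev/sdX here is hypothetical and stands for
the new blank drive):

ddrescue -f -n /dev/sdc /dev/sdX /root/sdc.map
ddrescue -f -r3 /dev/sdc /dev/sdX /root/sdc.map

The first pass skips the slow scraping of bad areas, the second retries
them; the map file lets ddrescue resume where it left off.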

You don't mention what drives they are. Are they CMR? Are they suitable 
for raid? For replacement drives, I'd look at upsizing to 4TB for a bit 
of headroom maybe (or look at moving to raid 6). And look at Seagate 
IronWolf, WD Red *PRO*, or Toshiba N300. (Personally I'd pass on the WD ...)

Once you've copied sdc and sdd, you can look at doing another force 
assemble, and you'll hopefully get your array back. At least the event 
count info implies damage to the array should be minimal.
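
For example, roughly like your earlier attempt (just an illustration -
re-check the device letters once the copies are in place, they may well
have changed):

mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdej]1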

https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn

Read, learn, and inwardly digest ...

And DON'T do anything that will make changes to the disks - like a 
re-create!!!

Cheers,
Wol


* Re: Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged
From: Martin Thoma @ 2021-11-17 18:10 UTC
  To: Wols Lists; +Cc: linux-raid

Thanks a lot.
I will try to get some new drives, do a dd, and then try to assemble
the raid again.

The drives are CMR drives, a few Western Digital and Seagate drives.

Regards

Martin

On Wed, 17 Nov 2021 at 18:56, Wols Lists <antlists@youngman.org.uk> wrote:
> [snip]



-- 
With kind regards / Mit freundlichen Grüßen

Martin Thoma

Göhrenstraße 3
72414 Rangendingen

Cell:  0176 80 16 03 68

Mail:  Thoma-Martin@gmx.net


* Re: Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged
From: Wol @ 2021-11-17 21:06 UTC
  To: Martin Thoma, Wols Lists; +Cc: linux-raid

On 17/11/2021 18:10, Martin Thoma wrote:
> Thanks a lot.
> I will try to get some new drives, do a dd, and then try to assemble
> the raid again.
> 
> The drives are CMR drives, a few Western Digital and Seagate drives.

Are they "raid friendly" though? What does "smartctl" tell you? Are the 
Seagates Barracudas (I hope not)?

Whether before or after the copy, I'd look at the smartctl output for the dodgy 
drives - they may have just been addled by the power fail but will be 
fine for backups, or they may have been on the way out and the power 
fail tipped them over the edge.
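
For example (generic smartctl invocations - substitute the right
devices):

smartctl -x /dev/sdc
smartctl -t long /dev/sdc
smartctl -l selftest /dev/sdc

-x gives the full picture, -t long kicks off an extended self-test, and
-l selftest shows the result once it's finished.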

Cheers,
Wol


* Re: Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged
From: Martin Thoma @ 2021-11-18 17:27 UTC
  To: Wol; +Cc: linux-raid

On Wed, 17 Nov 2021 at 22:06, Wol <antlists@youngman.org.uk> wrote:
> [snip]


Hey,

some of the drives are indeed Seagate Barracudas. Of the two faulty
drives, one is a Western Digital and the other a Toshiba.
Here is the smartctl data for those drives.

/dev/sdc

smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.10.0-21-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Green
Device Model:     WDC WD30EZRX-00D8PB0
Serial Number:    WD-WMC4N0E0NN71
LU WWN Device Id: 5 0014ee 6050ae2f5
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Nov 18 17:31:01 2021 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (39120) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 393) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x7035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   185   166   021    Pre-fail  Always       -       5708
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       668
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   023   021   000    Old_age   Always       -       56908
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       39
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       23
193 Load_Cycle_Count        0x0032   117   117   000    Old_age   Always       -       250280
194 Temperature_Celsius     0x0022   129   090   000    Old_age   Always       -       21
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


/dev/sdd
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.10.0-21-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     TOSHIBA HDWD130
Serial Number:    27R3EJDAS
LU WWN Device Id: 5 000039 fe6cfa767
Firmware Version: MX6OACF0
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Nov 18 17:31:33 2021 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (21935) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 366) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   139   139   054    Pre-fail  Offline      -       71
  3 Spin_Up_Time            0x0007   130   130   024    Pre-fail  Always       -       439 (Average 440)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       23
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   124   124   020    Pre-fail  Offline      -       33
  9 Power_On_Hours          0x0012   095   095   000    Old_age   Always       -       39181
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       16
192 Power-Off_Retract_Count 0x0032   099   099   000    Old_age   Always       -       1239
193 Load_Cycle_Count        0x0012   099   099   000    Old_age   Always       -       1239
194 Temperature_Celsius     0x0002   222   222   000    Old_age   Always       -       27 (Min/Max 23/67)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Again thanks a lot.

Regards

Martin

