* Advice please re failed Raid6
@ 2017-07-15 23:40 Bogo Mipps
  2017-07-16  0:58 ` Roman Mamedov
  2017-07-17  0:19 ` Peter Grandi
  0 siblings, 2 replies; 9+ messages in thread
From: Bogo Mipps @ 2017-07-15 23:40 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 8466 bytes --]

Hi List

I posted this to the Open Media Vault (my NAS OS) list a few days ago
without a response. A definitive answer would be appreciated.

I have been running a four-disk RAID 6 setup for over two years without
any issues, until suddenly on June 27 the disks on my OMV NAS became
100% full, including the NFS-mounted volumes in the RAID set. There had
been major disk activity overnight, but foolishly I didn't investigate.

Rsnapshot normally backs up two desktop machines onto the RAID array.
The next morning I found that one of the backup directories was no
longer on the array but had instead landed on the root filesystem of
the OMV machine; being a full backup (several GB), this accounted for
the 100% usage on the OMV/NAS machine.
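
In hindsight a simple guard would have prevented this. Something like
the following wrapper (a sketch only -- the mount point and paths are
placeholders, not my real config) is what I plan to call from cron
instead of invoking rsnapshot directly:

   #!/bin/sh
   # rsnapshot-guard.sh (hypothetical): refuse to run the backup unless
   # the RAID filesystem is actually mounted, so a full backup can
   # never land on the root filesystem again.
   mountpoint -q /srv/raid || {
       echo "RAID volume not mounted; skipping backup" >&2
       exit 1
   }
   exec rsnapshot "$@"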

The logs indicated that md had found a "dirty degraded array",
presumably due to a faulty sdb, so it had kicked that disk and then
could not start the RAID set (see the logs below).

I bought a new disk and installed it on July 4, and the array rebuilt
overnight (see the July 5 "RebuildFinished" entry below).

Since then I've been unable to mount or access any data. I have
followed the instructions in the Linux Raid Wiki's "Recovering a failed
software RAID" and "RAID Recovery" pages, but still no success. The
results of their suggestions are in the attached log file
"linux_raid_wiki_logs.txt".

Any help appreciated - even if it's just to tell me my RAID sets are hosed!

P.S. This line looks ominous: <md0: detected capacity change from
4000528203776 to 0> !!!

===============
Jun 27 16:52:21 keruru kernel: [ 2.912440] md: md0 stopped.
Jun 27 16:52:21 keruru kernel: [ 2.922315] md: bind<sdb>
Jun 27 16:52:21 keruru kernel: [ 2.922508] md: bind<sdc>
Jun 27 16:52:21 keruru kernel: [ 2.922643] md: bind<sde>
Jun 27 16:52:21 keruru kernel: [ 2.922777] md: bind<sdd>
Jun 27 16:52:21 keruru kernel: [ 2.922808] md: kicking non-fresh sdb 
from array!
Jun 27 16:52:21 keruru kernel: [ 2.922820] md: unbind<sdb>
Jun 27 16:52:21 keruru kernel: [ 2.927107] md: export_rdev(sdb)
Jun 27 16:52:21 keruru kernel: [ 2.994973] raid6: sse2x1 588 MB/s
Jun 27 16:52:21 keruru kernel: [ 3.062926] raid6: sse2x2 1395 MB/s
Jun 27 16:52:21 keruru kernel: [ 3.130841] raid6: sse2x4 2397 MB/s
Jun 27 16:52:21 keruru kernel: [ 3.130844] raid6: using algorithm sse2x4 
(2397 MB/s)
Jun 27 16:52:21 keruru kernel: [ 3.130846] raid6: using ssse3x2 recovery 
algorithm
Jun 27 16:52:21 keruru kernel: [ 3.130866] Switched to clocksource tsc
Jun 27 16:52:21 keruru kernel: [ 3.131227] xor: automatically using best 
checksumming function:
Jun 27 16:52:21 keruru kernel: [ 3.170797] avx : 6164.000 MB/sec
Jun 27 16:52:21 keruru kernel: [ 3.171121] async_tx: api initialized (async)
Jun 27 16:52:21 keruru kernel: [ 3.172809] md: raid6 personality 
registered for level 6
Jun 27 16:52:21 keruru kernel: [ 3.172812] md: raid5 personality 
registered for level 5
Jun 27 16:52:21 keruru kernel: [ 3.172815] md: raid4 personality 
registered for level 4
Jun 27 16:52:21 keruru kernel: [ 3.173218] md/raid:md0: not clean -- 
starting background reconstruction
Jun 27 16:52:21 keruru kernel: [ 3.173236] md/raid:md0: device sdd 
operational as raid disk 1
Jun 27 16:52:21 keruru kernel: [ 3.173239] md/raid:md0: device sde 
operational as raid disk 3
Jun 27 16:52:21 keruru kernel: [ 3.173242] md/raid:md0: device sdc 
operational as raid disk 2
Jun 27 16:52:21 keruru kernel: [ 3.173706] md/raid:md0: allocated 0kB
Jun 27 16:52:21 keruru kernel: [ 3.173745] md/raid:md0: cannot start 
dirty degraded array.
Jun 27 16:52:21 keruru kernel: [ 3.173811] RAID conf printout:
Jun 27 16:52:21 keruru kernel: [ 3.173814] --- level:6 rd:4 wd:3
Jun 27 16:52:21 keruru kernel: [ 3.173816] disk 1, o:1, dev:sdd
Jun 27 16:52:21 keruru kernel: [ 3.173818] disk 2, o:1, dev:sdc
Jun 27 16:52:21 keruru kernel: [ 3.173820] disk 3, o:1, dev:sde
Jun 27 16:52:21 keruru kernel: [ 3.174025] md/raid:md0: failed to run 
raid set.
Jun 27 16:52:21 keruru kernel: [ 3.174071] md: pers->run() failed ...
===============
New disk added - sdb
===============
Jul 5 21:06:18 keruru mdadm[2497]: RebuildFinished event detected on md 
device /dev/md0, component device mismatches found: 1847058224 (on raid 
level 6)
Jul 6 09:45:52 keruru kernel: [ 1195.390879] raid6: sse2x1 249 MB/s
Jul 6 09:45:52 keruru kernel: [ 1195.458735] raid6: sse2x2 476 MB/s
Jul 6 09:45:52 keruru kernel: [ 1195.526632] raid6: sse2x4 839 MB/s
Jul 6 09:45:52 keruru kernel: [ 1195.526638] raid6: using algorithm 
sse2x4 (839 MB/s)
Jul 6 09:45:52 keruru kernel: [ 1195.526644] raid6: using ssse3x2 
recovery algorithm
Jul 6 09:45:52 keruru kernel: [ 1195.578970] md: raid6 personality 
registered for level 6
Jul 6 09:45:52 keruru kernel: [ 1195.578980] md: raid5 personality 
registered for level 5
Jul 6 09:45:52 keruru kernel: [ 1195.578985] md: raid4 personality 
registered for level 4
Jul 6 09:45:52 keruru kernel: [ 1195.580003] md/raid:md0: device sdb 
operational as raid disk 0
Jul 6 09:45:52 keruru kernel: [ 1195.580012] md/raid:md0: device sde 
operational as raid disk 3
Jul 6 09:45:52 keruru kernel: [ 1195.580018] md/raid:md0: device sdd 
operational as raid disk 2
Jul 6 09:45:52 keruru kernel: [ 1195.580025] md/raid:md0: device sdc 
operational as raid disk 1
Jul 6 09:45:52 keruru kernel: [ 1195.581091] md/raid:md0: allocated 0kB
Jul 6 09:45:52 keruru kernel: [ 1195.581180] md/raid:md0: raid level 6 
active with 4 out of 4 devices, algorithm 2
Jul 6 09:52:30 keruru kernel: [ 4.186106] raid6: sse2x1 602 MB/s
Jul 6 09:52:30 keruru kernel: [ 4.254006] raid6: sse2x2 906 MB/s
Jul 6 09:52:30 keruru kernel: [ 4.321957] raid6: sse2x4 1130 MB/s
Jul 6 09:52:30 keruru kernel: [ 4.321964] raid6: using algorithm sse2x4 
(1130 MB/s)
Jul 6 09:52:30 keruru kernel: [ 4.321967] raid6: using ssse3x2 recovery 
algorithm
Jul 6 09:52:30 keruru kernel: [ 4.368478] md: raid6 personality 
registered for level 6
Jul 6 09:52:30 keruru kernel: [ 4.368486] md: raid5 personality 
registered for level 5
Jul 6 09:52:30 keruru kernel: [ 4.368490] md: raid4 personality 
registered for level 4
Jul 6 09:52:30 keruru kernel: [ 4.369179] md/raid:md0: device sdb 
operational as raid disk 0
Jul 6 09:52:30 keruru kernel: [ 4.369185] md/raid:md0: device sde 
operational as raid disk 3
Jul 6 09:52:30 keruru kernel: [ 4.369189] md/raid:md0: device sdd 
operational as raid disk 2
Jul 6 09:52:30 keruru kernel: [ 4.369194] md/raid:md0: device sdc 
operational as raid disk 1
Jul 6 09:52:30 keruru kernel: [ 4.369974] md/raid:md0: allocated 0kB
Jul 6 09:52:30 keruru kernel: [ 4.372062] md/raid:md0: raid level 6 
active with 4 out of 4 devices, algorithm 2
Jul 6 12:56:15 keruru kernel: [ 4.442184] raid6: sse2x1 739 MB/s
Jul 6 12:56:15 keruru kernel: [ 4.510060] raid6: sse2x2 1480 MB/s
Jul 6 12:56:15 keruru kernel: [ 4.577985] raid6: sse2x4 1605 MB/s
Jul 6 12:56:15 keruru kernel: [ 4.577993] raid6: using algorithm sse2x4 
(1605 MB/s)
Jul 6 12:56:15 keruru kernel: [ 4.577997] raid6: using ssse3x2 recovery 
algorithm
Jul 6 12:56:15 keruru kernel: [ 4.622570] md: raid6 personality 
registered for level 6
Jul 6 12:56:15 keruru kernel: [ 4.622577] md: raid5 personality 
registered for level 5
Jul 6 12:56:15 keruru kernel: [ 4.622580] md: raid4 personality 
registered for level 4
Jul 6 12:56:15 keruru kernel: [ 4.623261] md/raid:md0: device sdb 
operational as raid disk 0
Jul 6 12:56:15 keruru kernel: [ 4.623266] md/raid:md0: device sde 
operational as raid disk 3
Jul 6 12:56:15 keruru kernel: [ 4.623269] md/raid:md0: device sdd 
operational as raid disk 2
Jul 6 12:56:15 keruru kernel: [ 4.623273] md/raid:md0: device sdc 
operational as raid disk 1
Jul 6 12:56:15 keruru kernel: [ 4.624064] md/raid:md0: allocated 0kB
Jul 6 12:56:15 keruru kernel: [ 4.624131] md/raid:md0: raid level 6 
active with 4 out of 4 devices, algorithm 2
Jul 6 16:54:43 keruru kernel: [14401.858429] md/raid:md0: device sdb 
operational as raid disk 0
Jul 6 16:54:43 keruru kernel: [14401.858442] md/raid:md0: device sde 
operational as raid disk 3
Jul 6 16:54:43 keruru kernel: [14401.858449] md/raid:md0: device sdd 
operational as raid disk 2
Jul 6 16:54:43 keruru kernel: [14401.858455] md/raid:md0: device sdc 
operational as raid disk 1
Jul 6 16:54:43 keruru kernel: [14401.859915] md/raid:md0: allocated 0kB
Jul 6 16:54:43 keruru kernel: [14401.860000] md/raid:md0: raid level 6 
active with 4 out of 4 devices, algorithm 2

[-- Attachment #2: linux_raid_wiki_logs.txt --]
[-- Type: text/plain, Size: 7819 bytes --]

root@keruru:/var/log# mdadm --examine /dev/sd[bedc] >> raid.status
root@keruru:/var/log# cat raid.status 
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
           Name : keruru:0  (local to host keruru)
  Creation Time : Fri Jun 30 15:42:27 2017
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 79e4933f:dfe5923f:5ba03ae7:3efe38eb

    Update Time : Wed Jul  5 21:06:18 2017
       Checksum : 9ff2b025 - correct
         Events : 119

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
           Name : keruru:0  (local to host keruru)
  Creation Time : Fri Jun 30 15:42:27 2017
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : f1e1a946:711886a6:2604780f:8eba4a2d

    Update Time : Wed Jul  5 21:06:18 2017
       Checksum : 784b0046 - correct
         Events : 119

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
           Name : keruru:0  (local to host keruru)
  Creation Time : Fri Jun 30 15:42:27 2017
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : cf3bc8a7:9feed87d:945d8e77:08f7f32d

    Update Time : Wed Jul  5 21:06:18 2017
       Checksum : 197bc63c - correct
         Events : 119

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
           Name : keruru:0  (local to host keruru)
  Creation Time : Fri Jun 30 15:42:27 2017
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : b3323d81:279b7c7b:a0c534ed:46d0e6fc

    Update Time : Wed Jul  5 21:06:18 2017
       Checksum : 352daaf4 - correct
         Events : 119

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing)
================
root@keruru:/var/log# mdadm --examine /dev/sd[bedc] | egrep 'Event|/dev/sd'
/dev/sdb:
         Events : 119
/dev/sdc:
         Events : 119
/dev/sdd:
         Events : 119
/dev/sde:
         Events : 119
===============
root@keruru:/var/log# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@keruru:/var/log# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sde /dev/sdd /dev/sdc
mdadm: /dev/md0 has been started with 4 drives.
root@keruru:/var/log# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active (auto-read-only) raid6 sdb[4] sde[3] sdd[2] sdc[1]
      3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
===============
root@keruru:/var/log# grep Role raid.status
   Device Role : Active device 0
   Device Role : Active device 1
   Device Role : Active device 2
   Device Role : Active device 3
===============
root@keruru:/var/log# grep Used raid.status
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
===============
root@keruru:/var/log# mdadm --create --assume-clean --level=6 --raid-devices=4 --size=1953382912 /dev/md0 /dev/sdb /dev/sde /dev/sdd /dev/sdc
mdadm: /dev/sdb appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sde appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
Continue creating array? n
mdadm: create aborted.
===============
root@keruru:/var/log# mdadm --create --assume-clean --level=6 --raid-devices=4 --size=1953382912 /dev/md0 /dev/sdb /dev/sde /dev/sdd /dev/sdc
mdadm: /dev/sdb appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sde appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid6 devices=4 ctime=Fri Jun 30 15:42:27 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
===============
root@keruru:/# mount -t ext4 /dev/md0 /mnt/md0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
===============
root@keruru:/# dmesg | tail
[448318.800806]  --- level:6 rd:4 wd:4
[448318.800812]  disk 0, o:1, dev:sdb
[448318.800817]  disk 1, o:1, dev:sde
[448318.800822]  disk 2, o:1, dev:sdd
[448318.800827]  disk 3, o:1, dev:sdc
[448318.800951] md0: detected capacity change from 0 to 4000528203776
[448318.809375]  md0: unknown partition table
[448358.704189] EXT4-fs (md0): Unrecognized mount option "\x08" or missing value
[448358.706680] EXT4-fs (md0): failed to parse options in superblock: \x08
[448358.706690] EXT4-fs (md0): Number of reserved GDT blocks insanely large: 9216
=============== 
Jul 11 17:23:15 keruru kernel: [447719.812775] md: export_rdev(sdd)
Jul 11 17:23:19 keruru kernel: [447724.396327] md: md0 stopped.
Jul 11 17:23:19 keruru kernel: [447724.400278] md: bind<sdc>
Jul 11 17:32:29 keruru kernel: [448273.001687] md0: detected capacity change from 4000528203776 to 0
Jul 11 17:32:29 keruru kernel: [448273.001714] md: md0 stopped.
Jul 11 17:32:29 keruru kernel: [448273.001729] md: unbind<sdc>
Jul 11 17:32:29 keruru kernel: [448273.022972] md: export_rdev(sdc)
Jul 11 17:32:29 keruru kernel: [448273.023143] md: unbind<sde>
Jul 11 17:32:29 keruru kernel: [448273.054889] md: export_rdev(sde)
Jul 11 17:32:29 keruru kernel: [448273.055035] md: unbind<sdd>
Jul 11 17:32:29 keruru kernel: [448273.086870] md: export_rdev(sdd)
Jul 11 17:33:14 keruru kernel: [448318.800827]  disk 3, o:1, dev:sdc
===============



* Re: Advice please re failed Raid6
  2017-07-15 23:40 Advice please re failed Raid6 Bogo Mipps
@ 2017-07-16  0:58 ` Roman Mamedov
  2017-07-17  0:19 ` Peter Grandi
  1 sibling, 0 replies; 9+ messages in thread
From: Roman Mamedov @ 2017-07-16  0:58 UTC (permalink / raw)
  To: Bogo Mipps; +Cc: linux-raid

Hello,

One thing that I spotted:

Jun 27 16:52:21 keruru kernel: [ 2.922808] md: kicking non-fresh sdb from array!
Jun 27 16:52:21 keruru kernel: [ 3.173236] md/raid:md0: device sdd operational as raid disk 1
Jun 27 16:52:21 keruru kernel: [ 3.173239] md/raid:md0: device sde operational as raid disk 3
Jun 27 16:52:21 keruru kernel: [ 3.173242] md/raid:md0: device sdc operational as raid disk 2

The disk order here was "b d c e"

Jul 6 09:45:52 keruru kernel: [ 1195.580003] md/raid:md0: device sdb operational as raid disk 0
Jul 6 09:45:52 keruru kernel: [ 1195.580012] md/raid:md0: device sde operational as raid disk 3
Jul 6 09:45:52 keruru kernel: [ 1195.580018] md/raid:md0: device sdd operational as raid disk 2
Jul 6 09:45:52 keruru kernel: [ 1195.580025] md/raid:md0: device sdc operational as raid disk 1

But here the order changes to "b c d e"

Unless this is across reboots and your hardware detects disks in a random
order, something weird is going on here.
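
One way to take the kernel's sdX naming out of the picture (just a
suggestion) is to note down which physical drive currently claims which
member slot, using persistent names:

  # by-id names embed the model and serial number
  ls -l /dev/disk/by-id/ | grep -v part
  # and the slot each drive thinks it occupies
  mdadm --examine /dev/sd[bcde] | egrep '/dev/sd|Device Role|Device UUID'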

Also, your array has a creation time of "Fri Jun 30 15:42:27 2017", but
you provide no dmesg or logs from that date.
Maybe you forgot to use --assume-clean on one of the attempts (and that's
when you nuked the array entirely)?
Or perhaps a --create with --assume-clean also rewrites the creation
date; I am not sure.

-- 
With respect,
Roman


* Re: Advice please re failed Raid6
  2017-07-15 23:40 Advice please re failed Raid6 Bogo Mipps
  2017-07-16  0:58 ` Roman Mamedov
@ 2017-07-17  0:19 ` Peter Grandi
  2017-07-19  1:52   ` Bogo Mipps
  1 sibling, 1 reply; 9+ messages in thread
From: Peter Grandi @ 2017-07-17  0:19 UTC (permalink / raw)
  To: Linux RAID

> mdadm --create --assume-clean [ ... ]

That's a very dangerous recovery method that you need to get
exactly right or it will cause trouble.

Also it should be used in very rare cases, not routinely to
recover from one missing disk.

> root@keruru:/var/log# mdadm --examine /dev/sd[bedc] >> raid.status
> root@keruru:/var/log# cat raid.status 
> /dev/sdb:
>      Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
>      Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
>   Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
>     Data Offset : 262144 sectors
>     Device UUID : 79e4933f:dfe5923f:5ba03ae7:3efe38eb
>          Events : 119
>      Chunk Size : 512K
>    Device Role : Active device 0
>    Array State : AAAA ('A' == active, '.' == missing)
> /dev/sdc:
>      Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
>      Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
>   Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
>     Data Offset : 262144 sectors
>     Device UUID : f1e1a946:711886a6:2604780f:8eba4a2d
>          Events : 119
>      Chunk Size : 512K
>    Device Role : Active device 1
> /dev/sdd:
>      Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
>      Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
>   Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
>     Data Offset : 262144 sectors
>     Device UUID : cf3bc8a7:9feed87d:945d8e77:08f7f32d
>          Events : 119
>      Chunk Size : 512K
>    Device Role : Active device 2
> /dev/sde:
>      Array UUID : b1e6af5d:e5848ebe:63727445:2ab99719
>      Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
>   Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
>     Data Offset : 262144 sectors
>          Events : 119
>      Chunk Size : 512K
>    Device Role : Active device 3

That AAAA meant that the array was fine. The important fields of
'--examine' confirm that. The "dirty degraded array" most likely meant
some slight event-count difference; usually one just forces assembly in
that case.
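
For reference, a forced assembly in that situation would look roughly
like this (device names as in your logs; this is only appropriate while
the original superblocks are still intact):

   mdadm --stop /dev/md0
   mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
   cat /proc/mdstat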

The line that worries me a bit is this:

  Jul 5 21:06:18 keruru mdadm[2497]: RebuildFinished event
  detected on md  device /dev/md0, component device mismatches
  found: 1847058224 (on raid  level 6)

That seems to indicate that pretty much every block was a mismatch,
which would have happened if you put in a blank drive and then used
'--create --assume-clean' instead of '--assemble --force'. But
'--assume-clean' explicitly skips a rebuild, and I wonder whether,
having omitted it, you triggered a "repair" in 'sync_action'. Also, the
message is reported by 'mdadm', and it may be that 'mdadm' was running
in daemon mode and triggering a periodic "repair". I can't remember the
defaults.
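
The current state and the result of the last check/repair can be read
from sysfs without touching the array:

   cat /sys/block/md0/md/sync_action    # idle, check, repair, resync ...
   cat /sys/block/md0/md/mismatch_cnt   # sectors the last check/repair found inconsistent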

HOWEVER, there is a very subtle detail: the order of the devices from
'--examine' is 0: 'sdb', 1: 'sdc', 2: 'sdd', 3: 'sde', but you recreated
the set in a different order. The order of the devices does not matter
if they have MD superblocks, but here you are using '--create' to make
new superblocks, and the order must exactly match the original order.

> root@keruru:/var/log# mdadm --create --assume-clean --level=6 --raid-devices=4 --size=1953382912 /dev/md0 /dev/sdb /dev/sde /dev/sdd /dev/sdc

Probably the best thing you can do is to rerun this with members
"missing /dev/sdc /dev/sdd /dev/sde", and then use 'blkid /dev/md0'
to check whether the data in it is recognized again. If so, add
'/dev/sdb'. I did a quick test here of something close to that
and it worked...
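
Before any further '--create' attempt it is worth capturing everything
that has to be reproduced exactly, something like:

   # roles, data offset, chunk size and layout all have to match on re-create
   mdadm --examine /dev/sd[bcde] | \
     egrep '/dev/sd|Device Role|Data Offset|Chunk Size|Raid Level|Layout'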


* Re: Advice please re failed Raid6
  2017-07-17  0:19 ` Peter Grandi
@ 2017-07-19  1:52   ` Bogo Mipps
  2017-07-19 12:36     ` Peter Grandi
  0 siblings, 1 reply; 9+ messages in thread
From: Bogo Mipps @ 2017-07-19  1:52 UTC (permalink / raw)
  To: Peter Grandi, Linux Raid

On 07/17/2017 12:19 PM, Peter Grandi wrote:
> The line that worries a bit is this:
> 
>    Jul 5 21:06:18 keruru mdadm[2497]: RebuildFinished event
>    detected on md  device /dev/md0, component device mismatches
>    found: 1847058224 (on raid  level 6)
> 
> That seems to indicate that pretty much every block was a mismatch,
> which would have happened if you put in a blank drive and then used
> '--create --assume-clean' instead of '--assemble --force'. But
> '--assume-clean' explicitly skips a rebuild, and I wonder whether,
> having omitted it, you triggered a "repair" in 'sync_action'. Also,
> the message is reported by 'mdadm', and it may be that 'mdadm' was
> running in daemon mode and triggering a periodic "repair". I can't
> remember the defaults.
> 
> HOWEVER, there is a very subtle detail: the order of the devices
> from '--examine' is 0: 'sdb', 1: 'sdc', 2: 'sdd', 3: 'sde', but you
> recreated the set in a different order. The order of the devices does
> not matter if they have MD superblocks, but here you are using
> '--create' to make new superblocks, and the order must exactly match
> the original order.
> 
>> root@keruru:/var/log# mdadm --create --assume-clean --level=6 --raid-devices=4 --size=1953382912 /dev/md0 /dev/sdb /dev/sde /dev/sdd /dev/sdc
> 
> Probably the best thing you can do is to rerun this with members
> "missing /dev/sdc /dev/sdd /dev/sde", and then use 'blkid /dev/md0'
> to check whether the data in it is recognized again. If so, add
> '/dev/sdb'. I did a quick test here of something close to that
> and it worked...

Thanks for responding, Peter.  Much appreciated.

Did I do it right? (See below)

root@keruru:~# mdadm --create --assume-clean --level=6 --raid-devices=4 
--size=1953382912 /dev/md0 missing /dev/sdc /dev/sdd /dev/sde
mdadm: /dev/sdc appears to be part of a raid array:
     level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
mdadm: /dev/sdd appears to be part of a raid array:
     level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
mdadm: /dev/sde appears to be part of a raid array:
     level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

root@keruru:~# blkid /dev/md0
root@keruru:~#

root@keruru:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid6 sde[3] sdd[2] sdc[1]
       3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
[4/3] [_UUU]

unused devices: <none>

No output from blkid, so I am assuming 'not good'. But I've done
nothing further in case I misunderstood your suggestion.





* Re: Advice please re failed Raid6
  2017-07-19  1:52   ` Bogo Mipps
@ 2017-07-19 12:36     ` Peter Grandi
  2017-07-20  3:59       ` Bogo Mipps
       [not found]       ` <cf9aac00-91b3-3cb5-bceb-df5d7113b933@gmail.com>
  0 siblings, 2 replies; 9+ messages in thread
From: Peter Grandi @ 2017-07-19 12:36 UTC (permalink / raw)
  To: Linux Raid

> Did I do it right? (See below)

> root@keruru:~# mdadm --create --assume-clean --level=6 --raid-devices=4 
> --size=1953382912 /dev/md0 missing /dev/sdc /dev/sdd /dev/sde
> mdadm: /dev/sdc appears to be part of a raid array:
>      level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
> mdadm: /dev/sdd appears to be part of a raid array:
>      level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
> mdadm: /dev/sde appears to be part of a raid array:
>      level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
> Continue creating array? y
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md0 started.

This looks good, but is based on your original '--examine'
report as to the order of the devices, and whether they are
still bound to the same names 'sd[bcde]'.

> root@keruru:~# blkid /dev/md0

> root@keruru:~# cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active (auto-read-only) raid6 sde[3] sdd[2] sdc[1]
>        3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
> [4/3] [_UUU]

> unused devices: <none>

The 'mdstat' actually looks good, but 'blkid' should have
worked.

As I was saying, it is not clear to me whether the 'mdadm' daemon
instance triggered a 'check' or a 'repair' (the latter would be bad
news). I hope that you disabled it in the meantime while you try to fix
the mistake.

Trigger a 'check' and see if the set is consistent. If it is
consistent but the content cannot be read/mounted, then 'repair'
rewrote it; if it is not consistent, try a different order or
3-way subset of 'sd[bcde]'.
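
A check can be started and watched like this ('check' only counts
mismatches; it does not correct them the way 'repair' does):

   echo check > /sys/block/md0/md/sync_action
   watch cat /proc/mdstat                   # progress
   cat /sys/block/md0/md/mismatch_cnt       # 0 when finished means consistent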


* Re: Advice please re failed Raid6
  2017-07-19 12:36     ` Peter Grandi
@ 2017-07-20  3:59       ` Bogo Mipps
       [not found]       ` <cf9aac00-91b3-3cb5-bceb-df5d7113b933@gmail.com>
  1 sibling, 0 replies; 9+ messages in thread
From: Bogo Mipps @ 2017-07-20  3:59 UTC (permalink / raw)
  To: Peter Grandi, Linux Raid

On 07/20/2017 12:36 AM, Peter Grandi wrote:
>> Did I do it right? (See below)
> 
>> root@keruru:~# mdadm --create --assume-clean --level=6 --raid-devices=4
>> --size=1953382912 /dev/md0 missing /dev/sdc /dev/sdd /dev/sde
>> mdadm: /dev/sdc appears to be part of a raid array:
>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>> mdadm: /dev/sdd appears to be part of a raid array:
>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>> mdadm: /dev/sde appears to be part of a raid array:
>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>> Continue creating array? y
>> mdadm: Defaulting to version 1.2 metadata
>> mdadm: array /dev/md0 started.
> 
> This looks good, but is based on your original '--examine'
> report as to the order of the devices, and whether they are
> still bound to the same names 'sd[bcde]'.
> 
>> root@keruru:~# blkid /dev/md0
> 
>> root@keruru:~# cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4]
>> md0 : active (auto-read-only) raid6 sde[3] sdd[2] sdc[1]
>>         3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2
>> [4/3] [_UUU]
> 
>> unused devices: <none>
> 
> The 'mdstat' actually looks good, but 'blkid' should have
> worked.
> 
> As I was saying, it is not clear to me whether the 'mdadm' daemon
> instance triggered a 'check' or a 'repair' (bad news). I hope
> that you disabled that in the meantime while you try to fix the
> mistake.
> 
> Trigger a 'check' and see if the set is consistent. If it is
> consistent but the content cannot be read/mounted, then 'repair'
> rewrote it; if it is not consistent, try a different order or
> 3-way subset of 'sd[bcde]'.

Tried different order: sde, sdc, sdd. blkid worked. Added sdb as you 
suggested.  Currently rebuilding. Log below. Fingers crossed. Will 
report result.

  root@keruru:~# mdadm --create --assume-clean --level=6 
--raid-devices=4 /dev/md0 /dev/sde /dev/sdc /dev/sdd missing
mdadm: /dev/sde appears to be part of a raid array:
     level=raid6 devices=4 ctime=Wed Jul 19 13:40:04 2017
mdadm: /dev/sdc appears to be part of a raid array:
     level=raid6 devices=4 ctime=Wed Jul 19 13:40:04 2017
mdadm: /dev/sdd appears to be part of a raid array:
     level=raid6 devices=4 ctime=Wed Jul 19 13:40:04 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@keruru:~# blkid
/dev/sda1: UUID="3a26cce9-4598-4000-a921-1cde5ba78682" TYPE="ext4"
/dev/sda5: UUID="9761c2be-fef2-4ec6-8fff-dd69aa9e4eb2" TYPE="swap"
/dev/sdd: UUID="ca038387-a44e-2bb8-c2fd-abe8e9062ffd" 
UUID_SUB="8e502cd5-74ef-7b15-1bc3-6f2465e7a695" LABEL="keruru:0" 
TYPE="linux_raid_member"
/dev/sdc: UUID="ca038387-a44e-2bb8-c2fd-abe8e9062ffd" 
UUID_SUB="aeb6626b-3dc2-3fa8-0fd5-947081866c49" LABEL="keruru:0" 
TYPE="linux_raid_member"
/dev/sde: UUID="ca038387-a44e-2bb8-c2fd-abe8e9062ffd" 
UUID_SUB="bfff697b-c3ae-9e70-9ca9-ba168a30cbd2" LABEL="keruru:0" 
TYPE="linux_raid_member"

root@keruru:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid6 sdd[2] sdc[1] sde[0]
       3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
[4/3] [UUU_]
unused devices: <none>

root@keruru:~# mdadm --add /dev/md0 /dev/sdb
mdadm: added /dev/sdb

root@keruru:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb[4] sdd[2] sdc[1] sde[0]
       3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
[4/3] [UUU_]
       [>....................]  recovery =  0.0% (1258088/1953382912) 
finish=413.7min speed=78630K/sec

unused devices: <none>
root@keruru:~#




* Re: Advice please re failed Raid6
       [not found]       ` <cf9aac00-91b3-3cb5-bceb-df5d7113b933@gmail.com>
@ 2017-07-21  0:44         ` Bogo Mipps
  2017-07-21  9:48           ` Peter Grandi
  0 siblings, 1 reply; 9+ messages in thread
From: Bogo Mipps @ 2017-07-21  0:44 UTC (permalink / raw)
  To: Peter Grandi, Linux Raid

On 07/20/2017 03:55 PM, Bogo Mipps wrote:
> On 07/20/2017 12:36 AM, Peter Grandi wrote:
>>> Did I do it right? (See below)
>>
>>> root@keruru:~# mdadm --create --assume-clean --level=6 --raid-devices=4
>>> --size=1953382912 /dev/md0 missing /dev/sdc /dev/sdd /dev/sde
>>> mdadm: /dev/sdc appears to be part of a raid array:
>>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>>> mdadm: /dev/sdd appears to be part of a raid array:
>>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>>> mdadm: /dev/sde appears to be part of a raid array:
>>>       level=raid6 devices=4 ctime=Tue Jul 11 17:33:12 2017
>>> Continue creating array? y
>>> mdadm: Defaulting to version 1.2 metadata
>>> mdadm: array /dev/md0 started.
>>
>> This looks good, but is based on your original '--examine'
>> report as to the order of the devices, and whether they are
>> still bound to the same names 'sd[bcde]'.
>>
>>> root@keruru:~# blkid /dev/md0
>>
>>> root@keruru:~# cat /proc/mdstat
>>> Personalities : [raid6] [raid5] [raid4]
>>> md0 : active (auto-read-only) raid6 sde[3] sdd[2] sdc[1]
>>>         3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2
>>> [4/3] [_UUU]
>>
>>> unused devices: <none>
>>
>> The 'mdstat' actually looks good, but 'blkid' should have
>> worked.
>>
>> As I was saying, it is not clear to me whether the 'mdadm' daemon
>> instance triggered a 'check' or a 'repair' (bad news). I hope
>> that you disabled that in the meantime while you try to fix the
>> mistake.
>>
>> Trigger a 'check' and see if the set is consistent; if it is
>> consistent but the content cannot be read/mounted then 'repair'
>> rewrote it, if it is not consistent, try a different order or
>> 3-way subset of 'sd[bcde]'.
> 
> Tried different order: sde, sdc, sdd and blkid worked. Added sdb as you 
> suggested.  Currently rebuilding. Log below. Fingers crossed. Will 
> report result.

Peter, here is where I come unstuck.  Where to from here? Raid6 has 
rebuilt, apparently successfully, but I can't mount. I hesitate to make 
another move without advice ...

root@keruru:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb[4] sdd[2] sdc[1] sde[0]
       3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
[4/3] [UUU_]
       [=============>.......]  recovery = 69.3% (1353992192/1953382912) 
finish=162.5min speed=61440K/sec

unused devices: <none>

root@keruru:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb[4] sdd[2] sdc[1] sde[0]
       3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 
[4/4] [UUUU]

unused devices: <none>

root@keruru:/# mount /dev/md0 /mnt/md0
mount: you must specify the filesystem type

root@keruru:/# mount -t ext4 /dev/md0 /mnt/md0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail  or so

root@keruru:/# dmesg | tail
[29458.547966] RAID conf printout:
[29458.547981]  --- level:6 rd:4 wd:4
[29458.547989]  disk 0, o:1, dev:sde
[29458.547995]  disk 1, o:1, dev:sdc
[29458.548001]  disk 2, o:1, dev:sdd
[29458.548007]  disk 3, o:1, dev:sdb
[48138.300934] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[48138.301411] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[48138.301856] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[48155.451147] EXT4-fs (md0): VFS: Can't find ext4 filesystem



* Re: Advice please re failed Raid6
  2017-07-21  0:44         ` Bogo Mipps
@ 2017-07-21  9:48           ` Peter Grandi
  2017-07-23  0:13             ` Bogo Mipps
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Grandi @ 2017-07-21  9:48 UTC (permalink / raw)
  To: Linux RAID

>> Tried different order: sde, sdc, sdd and blkid worked.

It is not clear what "blkid worked" means here. It should have
reported an 'ext4' filesystem.
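
A few read-only ways to see whether any ext4 signature is visible on
the assembled device (none of these write to /dev/md0):

   blkid -p /dev/md0        # low-level probe for filesystem signatures
   dumpe2fs -h /dev/md0     # print the primary ext4 superblock, if one exists
   fsck.ext4 -n /dev/md0    # -n: report only, never modify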

>> Added sdb as you suggested.

I actually wrote: "try a different order or 3-way subset of
'sd[bcde]'." Perhaps "3-way subset" was not clear. Only when the right
subset in the right order has been found is adding a fourth member
worth it.
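
A sketch of what trying one candidate looks like, with the parameters
copied from your earlier attempts (vary the device names and the
position of 'missing'; stop the array between tries, and verify
afterwards with '--examine' that the data offset still matches the
original 262144 sectors):

   mdadm --stop /dev/md0
   mdadm --create --assume-clean --level=6 --raid-devices=4 --chunk=512 \
         --size=1953382912 /dev/md0 missing /dev/sdc /dev/sdd /dev/sde
   blkid /dev/md0 && fsck.ext4 -n /dev/md0   # read-only: look for an ext4 signature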

Also, it matters enormously whether "Added sdb" was done after
recreating the set with four members including 'missing', or with just
3. It is not clear what you have done.

Also I had written: "not clear to me whether the 'mdadm' daemon
instance triggered a 'check' or a 'repair'" and you seem to have
not looked into that.

Also I had written: "I hope that you disabled that in the
meantime" and it is not clear whether you have done so.

Also I had written: "Trigger a 'check' and see if the set is
consistent", and I have no idea whether that happened and what
the result was.

Your actions and reports seem to be somewhat lackadaisical and
distracted for what is quite a subtle situation.

>> Currently rebuilding.

Adding back 'sdb' and rebuilding: you can leave that until you have
found the right order. Also, before adding 'sdb' you would have run
'wipefs' or 'mdadm --zero-superblock' on it, I hope.

> Peter, here is where I come unstuck.  Where to from here?
> Raid6 has rebuilt, apparently successfully, but I can't mount.

It's difficult to say, because it is not clear what is going on: if
the right order of members is (sdb sde sdc sdd), the original output of
'mdadm --examine' is not consistent with that.

The issue here continues to be the right order of the devices as
members, and I am not sure that you know which devices are which. I
don't know how accurate your reports are as to what happened and what
you are doing.

> [29458.547989]  disk 0, o:1, dev:sde
> [29458.547995]  disk 1, o:1, dev:sdc
> [29458.548001]  disk 2, o:1, dev:sdd
> [29458.548007]  disk 3, o:1, dev:sdb

To me it seems pretty unlikely that 'sdb' would be member 3, but
again given your conflicting information as to past and current
actions, I cannot guess what is really going on.

But then your situation should be pretty easy: according to your
reports, you have a set of 4 devices in RAID6, which means that
any 2 devices of the 4 are sufficient to make the set work. The
only problem is knowing in which positions.

For the first stripe, the first 512KiB on each drive, the layout
will be:

     member 0: the first 512KiB of the 'ext4', with the superblock.
     member 1: the second 512KiB of the 'ext4', with a distinctive layout.
     member 2: 512KiB of P (XOR parity), looking like gibberish.
     member 3: 512KiB of Q (syndrome), looking like gibberish.

It might be interesting to see the output of:

   for D in c d e
   do
     echo
     echo "*** $D"
     blkid /dev/sd$D
     dd bs=512K count=1 if=/dev/sd$D | file -
     dd bs=512K count=1 if=/dev/sd$D | strings -a
   done
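
One caveat with reading the raw devices: with 1.2 metadata the member
data starts at the Data Offset (262144 sectors = 128 MiB in your
original '--examine' output), so a variant that skips to the start of
the data area may be more telling (assuming that offset still applies
after the re-creates; check '--examine' first):

   for D in c d e
   do
     echo
     echo "*** $D (first 512KiB of member data, 128MiB in)"
     dd bs=512K skip=256 count=1 if=/dev/sd$D 2>/dev/null | file -
     dd bs=512K skip=256 count=1 if=/dev/sd$D 2>/dev/null | strings -a | head -40
   done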


* Re: Advice please re failed Raid6
  2017-07-21  9:48           ` Peter Grandi
@ 2017-07-23  0:13             ` Bogo Mipps
  0 siblings, 0 replies; 9+ messages in thread
From: Bogo Mipps @ 2017-07-23  0:13 UTC (permalink / raw)
  To: Peter Grandi, Linux Raid

On 07/21/2017 09:48 PM, Peter Grandi wrote:
>>> Tried different order: sde, sdc, sdd and blkid worked.
> 
> It is not clear what "blkid worked" means here. It should have
> reported an 'ext4' filesystem.
> 
>>> Added sdb as you suggested.
> 
> I actually wrote: "try a different order or 3-way subset of
> 'sd[bcde]'." Perhaps "3-way subset" was not clear. Only when the
> right subset in the right order has been found is adding a fourth
> member worth it.
> 
> Also, it matters enormously whether "Added sdb" was done after
> recreating the set with four members including 'missing', or with
> just 3. It is not clear what you have done.
> 
> Also I had written: "not clear to me whether the 'mdadm' daemon
> instance triggered a 'check' or a 'repair'" and you seem to have
> not looked into that.
> 
> Also I had written: "I hope that you disabled that in the
> meantime" and it is not clear whether you have done so.
> 
> Also I had written: "Trigger a 'check' and see if the set is
> consistent", and I have no idea whether that happened and what
> the result was.
> 
> Your actions and reports seem to be somewhat lackadaisical and
> distracted as to what is a quite subtle situation.
> 
>>> Currently rebuilding.
> 
> Adding back 'sdb' and rebuilding: you can leave that to the
> point where you have found the right order. Also before adding
> 'sdb' you would have used 'wipefs'/'mdadm --zero' it, I hope.
> 
>> Peter, here is where I come unstuck.  Where to from here?
>> Raid6 has rebuilt, apparently successfully, but I can't mount.
> 
> It's difficult to say, because it is not clear what is going on,
> because if the right order of members is (sdb sde sdc sdd) the
> original output of 'mdadm --examine' is not consistent with that.
> 
> The issue here continues to be the right order of the devices as
> members, and I am not sure that you know which devices are which. I
> don't know how accurate your reports are as to what happened and what
> you are doing.
> 
>> [29458.547989]  disk 0, o:1, dev:sde
>> [29458.547995]  disk 1, o:1, dev:sdc
>> [29458.548001]  disk 2, o:1, dev:sdd
>> [29458.548007]  disk 3, o:1, dev:sdb
> 
> To me it seems pretty unlikely that 'sdb' would be member 3, but
> again given your conflicting information as to past and current
> actions, I cannot guess what is really going on.
> 
> But then your situation should be pretty easy: according to your
> reports, you have a set of 4 devices in RAID6, which means that
> any 2 devices of the 4 are sufficient to make the set work. The
> only problem is knowing in which positions.
> 
> For the first stripe, the first 512KiB on each drive, the layout
> will be:
> 
>       member 0: the first 512KiB of the 'ext4', with the superblock.
>       member 1: the second 512KiB of the 'ext4', with a distinctive layout.
>       member 2: 512KiB of P (XOR parity), looking like gibberish.
>       member 3: 512KiB of Q (syndrome), looking like gibberish.
> 
> It might be interesting to see the output of:
> 
>     for D in c d e
>     do
>       echo
>       echo "*** $D"
>       blkid /dev/sd$D
>       dd bs=512K count=1 if=/dev/sd$D | file -
>       dd bs=512K count=1 if=/dev/sd$D | strings -a
>     done

Peter, thank you for your detailed response. Much appreciated. My major 
regret is not coming to this list earlier. I only discovered, far too 
late, that I should have taken expert advice before I attempted any 
remedial work. Too much erroneous information flying around the 'net.

I will now carefully follow your suggestions as above and report back in 
a couple of days. The data on this Raid set is irreplaceable, and I want 
to do everything I can to regain access.

Regards.

