* hung grow
@ 2017-10-04 17:18 Curt
  2017-10-04 17:51 ` Anthony Youngman
  0 siblings, 1 reply; 59+ messages in thread
From: Curt @ 2017-10-04 17:18 UTC (permalink / raw)
  To: linux-raid

Hello all,

So, I've got my raid6 in a half-fucked state, you could say. My raid6
lost 3 drives and then started throwing I/O errors on the mount point.
If I tried to restart with just the good drives, I got too few devices
to start the raid. One of the drives that got marked faulty had an
event count that almost matched the others, only off by a few. So I
did an assemble with --force, which seemed to work, and I could see my
data. I should have just pulled the data off at that point and saved
what I could, but no.
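For reference, the forced assembly I ran was along these lines (I'm
reconstructing the device names from the --detail output below, so
treat them as approximate):

    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sdg1 /dev/sdd1 /dev/sdc1 /dev/sda1 /dev/sde1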

So I replaced 2 of the bad drives and added them to the raid. It went
through recovery, but it only marked the 2 new drives as spares and
showed the bad drives as removed, 2 spares, and one marked faulty
again; see below.
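The adds were roughly this (again, device names as they appear in the
output below):

    mdadm /dev/md127 --add /dev/sdf
    mdadm /dev/md127 --add /dev/sdb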

uname -a
Linux dev 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux

mdadm --detail /dev/md127
/dev/md127:
           Version : 0.90
     Creation Time : Fri Jun 15 15:52:05 2012
        Raid Level : raid6
        Array Size : 9767519360 (9315.03 GiB 10001.94 GB)
     Used Dev Size : 1953503872 (1863.01 GiB 2000.39 GB)
      Raid Devices : 7
     Total Devices : 7
   Preferred Minor : 127
       Persistence : Superblock is persistent

       Update Time : Tue Oct  3 22:22:06 2017
             State : clean, FAILED
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 1
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              UUID : 714a612d:9bd35197:36c91ae3:c168144d
            Events : 0.11559613

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8       49        1      active sync   /dev/sdd1
       2       8       33        2      active sync   /dev/sdc1
       3       8        1        3      active sync   /dev/sda1
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed

       7       8       80        -      spare   /dev/sdf
       8       8       16        -      spare   /dev/sdb
       9       8       65        -      faulty   /dev/sde1

After several tries to reassemble, the spares still wouldn't go
active. So on the advice of someone, I set the raid to grow to 8
devices, the theory being it would make one spare active. That
somewhat worked, but the grow froze at 0%, and now when I do a detail
on md127 it just hangs. It returned once when this first started and
showed the spare in "spare rebuilding" status, and sync_action showed
reshape, as did mdstat.


Examine returns this; it's the same for all the drives as far as I can see:
mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.91.00
           UUID : 714a612d:9bd35197:36c91ae3:c168144d
  Creation Time : Fri Jun 15 15:52:05 2012
     Raid Level : raid6
  Used Dev Size : 1953503872 (1863.01 GiB 2000.39 GB)
     Array Size : 11721023232 (11178.04 GiB 12002.33 GB)
   Raid Devices : 8
  Total Devices : 6
Preferred Minor : 127

  Reshape pos'n : 3799296 (3.62 GiB 3.89 GB)
  Delta Devices : 1 (7->8)

    Update Time : Wed Oct  4 10:10:37 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 2
  Spare Devices : 0
       Checksum : ce71846f - correct
         Events : 11559679

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8        1        3      active sync   /dev/sda1

   0     0       8       97        0      active sync   /dev/sdg1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8        1        3      active sync   /dev/sda1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       0        0        5      faulty removed
   6     6       8       16        6      active   /dev/sdb
   7     7       0        0        7      faulty removed


Is my raid completely fucked, or can I still recover some data by
doing the create with --assume-clean?
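To be clear, what I'm contemplating (NOT something I've run) is along
these lines, taking the level, chunk, layout, and metadata version
from the --detail output above, with the missing slots as "missing" —
I understand the device order and every parameter would have to
exactly match the original array for this to have any chance:

    mdadm --create /dev/md127 --assume-clean --metadata=0.90 \
        --level=6 --raid-devices=7 --chunk=64 --layout=left-symmetric \
        /dev/sdg1 /dev/sdd1 /dev/sdc1 /dev/sda1 /dev/sde1 missing missing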

Cheers,
Curt



Thread overview: 59+ messages
2017-10-04 17:18 hung grow Curt
2017-10-04 17:51 ` Anthony Youngman
2017-10-04 18:16   ` Curt
2017-10-04 18:29     ` Joe Landman
2017-10-04 18:37       ` Curt
2017-10-04 18:44         ` Joe Landman
2017-10-04 19:01           ` Anthony Youngman
2017-10-04 19:09             ` Curt
2017-10-04 19:46               ` Anthony Youngman
2017-10-04 20:01                 ` Curt
2017-10-04 21:08                   ` Anthony Youngman
2017-10-04 21:53                     ` Phil Turmel
     [not found]                       ` <CADg2FGbnMzLBqWthKY5Uo__ANC2kAqH_8B1G23nhW+7hWJ=KeA@mail.gmail.com>
2017-10-06  1:25                         ` Curt
2017-10-06 11:16                           ` Wols Lists
     [not found]                         ` <CADg2FGYc-sPjwukuhonoUUCr3ze3PQWv8gtZPnUT=E4CvsQftg@mail.gmail.com>
2017-10-06 13:13                           ` Phil Turmel
2017-10-06 14:07                             ` Curt
2017-10-06 14:27                               ` Joe Landman
2017-10-06 14:27                               ` Phil Turmel
2017-10-07  3:09                                 ` Curt
2017-10-07  3:15                                   ` Curt
2017-10-07 20:45                                     ` Curt
2017-10-07 21:29                                       ` Phil Turmel
2017-10-08 22:40                                         ` Curt
2017-10-09  1:23                                           ` NeilBrown
2017-10-09  1:40                                             ` Curt
2017-10-09  4:28                                               ` NeilBrown
2017-10-09  4:59                                                 ` Curt
2017-10-09  5:47                                                   ` NeilBrown
2017-10-09 12:41                                                 ` Curt
2017-10-10 12:08                                                   ` Curt
2017-10-10 13:06                                                     ` Phil Turmel
2017-10-10 13:37                                                       ` Anthony Youngman
2017-10-10 14:00                                                         ` Phil Turmel
2017-10-10 14:11                                                           ` Curt
2017-10-10 14:14                                                             ` Reindl Harald
2017-10-10 14:15                                                             ` Phil Turmel
2017-10-10 14:23                                                               ` Curt
2017-10-10 18:06                                                                 ` Phil Turmel
2017-10-10 19:25                                                                   ` Curt
2017-10-10 19:42                                                                     ` Phil Turmel
2017-10-10 19:49                                                                       ` Curt
2017-10-10 19:51                                                                         ` Curt
2017-10-10 20:18                                                                           ` Phil Turmel
2017-10-10 20:29                                                                             ` Curt
2017-10-10 20:31                                                                               ` Phil Turmel
2017-10-10 20:48                                                                                 ` Curt
2017-10-10 20:47                                                                     ` NeilBrown
2017-10-10 20:58                                                                       ` Curt
2017-10-10 21:23                                                                         ` Curt
2017-10-10 21:56                                                                           ` NeilBrown
2017-10-11  0:26                                                                             ` Curt
2017-10-11  4:46                                                                               ` NeilBrown
2017-10-11  2:20                                                                       ` Curt
2017-10-11  4:49                                                                         ` NeilBrown
2017-10-11 15:38                                                                           ` Curt
2017-10-12  6:15                                                                             ` NeilBrown
2017-10-10 14:12                                                           ` Anthony Youngman
2017-10-04 19:06         ` Anthony Youngman
2017-10-04 18:57     ` Anthony Youngman
