* Failed RAID6
From: Larry Owen @ 2012-02-24 17:56 UTC
  To: linux-raid

We have a RAID6 array with 3 "failed" disks.  However, SMART
status for all drives is OK.  I'm also not seeing any I/O errors
in my logs (except when the disks were initially dropped).  I
also tried reading raw data from one of the failed drives,
using dd.  I was able to get data, still with no I/O errors.
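
The read test was along these lines (a sketch; /dev/sdi is a guess
at one of the dropped slots, going by the gaps in the device list
below):

  # read the first GiB raw and discard it; a bad sector would
  # surface as an I/O error on stderr and a non-zero exit status
  dd if=/dev/sdi of=/dev/null bs=1M count=1024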

We're thinking about trying --re-add or --assemble --force,
in the hope of getting the array up to recover some data (did
I mention the users put important data on here knowing it was
not backed up?).

Our main concern is that if we add the disks back, the system
will start rebuilding.  Any suggestions on how we should proceed?
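
Before forcing anything we could compare the per-device event
counts, something like this (the sd[b-y] glob is an assumption
based on the device names in the output below):

  # superblocks on dropped members usually lag slightly; a large
  # gap in Events would mean the data on that drive is stale
  mdadm --examine /dev/sd[b-y] | grep -E 'Events|/dev/sd'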

Thanks,
Larry Owen

################### MDADM OUTPUT ##############################

mdadm --detail /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Tue Feb  1 16:25:03 2011
     Raid Level : raid6
  Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
   Raid Devices : 22
  Total Devices : 21
    Persistence : Superblock is persistent

    Update Time : Wed Feb 22 23:42:11 2012
          State : active, FAILED, Not Started
 Active Devices : 19
Working Devices : 21
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : kddm-nas:0  (local to host kddm-nas)
           UUID : 9fc31e38:7723974a:1f37af62:5a75ec91
         Events : 5412

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        6      active sync   /dev/sdh
       7       0        0        7      removed
       8       8      144        8      active sync   /dev/sdj
       9       8      160        9      active sync   /dev/sdk
      10       8      176       10      active sync   /dev/sdl
      11       0        0       11      removed
      12       8      208       12      active sync   /dev/sdn
      13       8      224       13      active sync   /dev/sdo
      14       8      240       14      active sync   /dev/sdp
      15       0        0       15      removed
      16      65       16       16      active sync   /dev/sdr
      17      65       32       17      active sync   /dev/sds
      18      65       48       18      active sync   /dev/sdt
      19      65       64       19      active sync   /dev/sdu
      20      65       80       20      active sync   /dev/sdv
      21      65       96       21      active sync   /dev/sdw

      22      65      112        -      spare   /dev/sdx
      23      65      128        -      spare   /dev/sdy





* Re: Failed RAID6
From: NeilBrown @ 2012-02-24 20:52 UTC
  To: Larry Owen; +Cc: linux-raid

On Fri, 24 Feb 2012 11:56:39 -0600 (CST) Larry Owen <larryowen@cis.uab.edu>
wrote:

> We have a RAID6 array with 3 "failed" disks.  However, SMART
> status for all drives is OK.  I'm also not seeing any I/O errors
> in my logs (except when the disks were initially dropped).  I
> also tried reading raw data from one of the failed drives,
> using dd.  I was able to get data, still with no I/O errors.
>
> We're thinking about trying --re-add or --assemble --force,
> in the hope of getting the array up to recover some data (did
> I mention the users put important data on here knowing it was
> not backed up?).
>
> Our main concern is that if we add the disks back, the system
> will start rebuilding.  Any suggestions on how we should proceed?

Use assemble --force - this situation is exactly what it is for.

mdadm -S /dev/md0
mdadm -A --force /dev/md0

You might need to list all the component devices, depending on what is in
your mdadm.conf.
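
For example, something like this (a sketch; the sd[b-y] glob just
covers the device names shown in your --detail output, members and
spares alike):

  mdadm -A --force /dev/md0 /dev/sd[b-y]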

NeilBrown


