Linux-Raid Archives on lore.kernel.org
* Repairing IMSM RAID array "active, FAILED, not started"
       [not found] <CC93341E865248F8AB635929EF587792@Tosh10Pro>
@ 2021-02-06  3:19 ` 19 Devices linuxraid
  2021-02-08 13:28   ` Tkaczyk, Mariusz
  0 siblings, 1 reply; 2+ messages in thread
From: 19 Devices linuxraid @ 2021-02-06  3:19 UTC (permalink / raw)
  To: linux-raid

Hi, I'm hoping you can help me repair this RAID array (md125 below).  It 
failed after a series of repeated power interruptions.  There are four 
1 TB drives with two RAID 5 arrays spread across them.  One array (md126) 
is working, as are all four drives.

The boot drive was on the failed array so the system is running from a 
Fedora 33 Live USB.  Details of the 3 arrays and 4 drives follow.

[root@localhost-live ~]# mdadm -D /dev/md125
/dev/md125:
         Container : /dev/md/imsm0, member 0
      Raid Devices : 4
     Total Devices : 3

             State : active, FAILED, Not Started
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown


              UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed

       -       8       32        0      sync   /dev/sdc
       -       8        0        1      sync   /dev/sda
       -       8       48        3      sync   /dev/sdd
[root@localhost-live ~]#

[root@localhost-live ~]# mdadm -D /dev/md126
/dev/md126:
         Container : /dev/md/imsm0, member 1
        Raid Level : raid5
        Array Size : 99116032 (94.52 GiB 101.49 GB)
     Used Dev Size : 33038976 (31.51 GiB 33.83 GB)
      Raid Devices : 4
     Total Devices : 4

             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-asymmetric
        Chunk Size : 128K

Consistency Policy : resync

    Rebuild Status : 35% complete


              UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
    Number   Major   Minor   RaidDevice State
       3       8       32        0      active sync   /dev/sdc
       2       8        0        1      active sync   /dev/sda
       1       8       16        2      spare rebuilding   /dev/sdb
       0       8       48        3      active sync   /dev/sdd
[root@localhost-live ~]#

[root@localhost-live ~]# mdadm -D /dev/md127
/dev/md127:
           Version : imsm
        Raid Level : container
     Total Devices : 4

   Working Devices : 4


              UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
     Member Arrays : /dev/md125 /dev/md/md1_0

    Number   Major   Minor   RaidDevice

       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8       48        -        /dev/sdd
       -       8       16        -        /dev/sdb
[root@localhost-live ~]#


[root@localhost-live ~]# mdadm --examine /dev/sda
/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : ab386e31
         Family : 775b3841
     Generation : 00458337
     Attributes : All supported
           UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
       Checksum : f25e8e6d correct
    MPB Sectors : 2
          Disks : 5
   RAID Devices : 2

  Disk01 Serial : WD-WCC3F1681668
          State : active
             Id : 00000001
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

[md0]:
           UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
     RAID Level : 5
        Members : 4
          Slots : [UU_U]
    Failed disk : 2
      This Slot : 1
    Sector Size : 512
     Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
   Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
  Sector Offset : 0
    Num Stripes : 7372800
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : degraded
    Dirty State : dirty
     RWH Policy : off

[md1]:
           UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
     RAID Level : 5
        Members : 4
          Slots : [UUUU]
    Failed disk : none
      This Slot : 1
    Sector Size : 512
     Array Size : 198232064 (94.52 GiB 101.49 GB)
   Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
  Sector Offset : 1887440896
    Num Stripes : 258117
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : <unknown:128>

  Disk00 Serial : S13PJDWS608386
          State : active
             Id : 00000003
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk02 Serial : D-WMC3F2148323:0
          State : active
             Id : ffffffff
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk03 Serial : S13PJDWS608384
          State : active
             Id : 00000004
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk04 Serial : WD-WMC3F2148323
          State : active
             Id : 00000002
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
[root@localhost-live ~]#


[root@localhost-live ~]# mdadm --examine /dev/sdb
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : ab386e31
         Family : 775b3841
     Generation : 00458337
     Attributes : All supported
           UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
       Checksum : f25e8e6d correct
    MPB Sectors : 2
          Disks : 5
   RAID Devices : 2

  Disk04 Serial : WD-WMC3F2148323
          State : active
             Id : 00000002
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

[md0]:
           UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
     RAID Level : 5
        Members : 4
          Slots : [UU_U]
    Failed disk : 2
      This Slot : ?
    Sector Size : 512
     Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
   Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
  Sector Offset : 0
    Num Stripes : 7372800
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : degraded
    Dirty State : dirty
     RWH Policy : off

[md1]:
           UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
     RAID Level : 5
        Members : 4
          Slots : [UUUU]
    Failed disk : none
      This Slot : 2
    Sector Size : 512
     Array Size : 198232064 (94.52 GiB 101.49 GB)
   Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
  Sector Offset : 1887440896
    Num Stripes : 258117
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : <unknown:128>

  Disk00 Serial : S13PJDWS608386
          State : active
             Id : 00000003
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk01 Serial : WD-WCC3F1681668
          State : active
             Id : 00000001
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk02 Serial : D-WMC3F2148323:0
          State : active
             Id : ffffffff
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk03 Serial : S13PJDWS608384
          State : active
             Id : 00000004
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
[root@localhost-live ~]#


[root@localhost-live ~]# mdadm --examine /dev/sdc
/dev/sdc:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : ab386e31
         Family : 775b3841
     Generation : 00458337
     Attributes : All supported
           UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
       Checksum : f25e8e6d correct
    MPB Sectors : 2
          Disks : 5
   RAID Devices : 2

  Disk00 Serial : S13PJDWS608386
          State : active
             Id : 00000003
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

[md0]:
           UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
     RAID Level : 5
        Members : 4
          Slots : [UU_U]
    Failed disk : 2
      This Slot : 0
    Sector Size : 512
     Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
   Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
  Sector Offset : 0
    Num Stripes : 7372800
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : degraded
    Dirty State : dirty
     RWH Policy : off

[md1]:
           UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
     RAID Level : 5
        Members : 4
          Slots : [UUUU]
    Failed disk : none
      This Slot : 0
    Sector Size : 512
     Array Size : 198232064 (94.52 GiB 101.49 GB)
   Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
  Sector Offset : 1887440896
    Num Stripes : 258117
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : <unknown:128>

  Disk01 Serial : WD-WCC3F1681668
          State : active
             Id : 00000001
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk02 Serial : D-WMC3F2148323:0
          State : active
             Id : ffffffff
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk03 Serial : S13PJDWS608384
          State : active
             Id : 00000004
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk04 Serial : WD-WMC3F2148323
          State : active
             Id : 00000002
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
[root@localhost-live ~]#


[root@localhost-live ~]# mdadm --examine /dev/sdd
/dev/sdd:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : ab386e31
         Family : 775b3841
     Generation : 00458337
     Attributes : All supported
           UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
       Checksum : f25e8e6d correct
    MPB Sectors : 2
          Disks : 5
   RAID Devices : 2

  Disk03 Serial : S13PJDWS608384
          State : active
             Id : 00000004
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

[md0]:
           UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
     RAID Level : 5
        Members : 4
          Slots : [UU_U]
    Failed disk : 2
      This Slot : 3
    Sector Size : 512
     Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
   Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
  Sector Offset : 0
    Num Stripes : 7372800
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : degraded
    Dirty State : dirty
     RWH Policy : off

[md1]:
           UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
     RAID Level : 5
        Members : 4
          Slots : [UUUU]
    Failed disk : none
      This Slot : 3
    Sector Size : 512
     Array Size : 198232064 (94.52 GiB 101.49 GB)
   Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
  Sector Offset : 1887440896
    Num Stripes : 258117
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : <unknown:128>

  Disk00 Serial : S13PJDWS608386
          State : active
             Id : 00000003
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk01 Serial : WD-WCC3F1681668
          State : active
             Id : 00000001
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk02 Serial : D-WMC3F2148323:0
          State : active
             Id : ffffffff
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)

  Disk04 Serial : WD-WMC3F2148323
          State : active
             Id : 00000002
    Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
[root@localhost-live ~]#

Thanks

ps. Why was my Outlook.com email address rejected by this server? 



* Re: Repairing IMSM RAID array "active, FAILED, not started"
  2021-02-06  3:19 ` Repairing IMSM RAID array "active, FAILED, not started" 19 Devices linuxraid
@ 2021-02-08 13:28   ` Tkaczyk, Mariusz
  0 siblings, 0 replies; 2+ messages in thread
From: Tkaczyk, Mariusz @ 2021-02-08 13:28 UTC (permalink / raw)
  To: 19 Devices linuxraid; +Cc: linux-raid

Hello,

You have hit the dirty, degraded RAID5 scenario:
 >       Map State : degraded
 >     Dirty State : dirty
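If you want to scan saved `mdadm --examine` output for volumes in this state, a small filter over the "Map State" / "Dirty State" lines is enough. A minimal sketch (the awk filter below is my illustration, not part of mdadm):

```shell
# Print each volume section whose map is degraded while the dirty
# bit is still set, given `mdadm --examine` output on stdin.
detect_dirty_degraded() {
    awk '
        /^\[md/        { vol = $0; degraded = 0 }  # volume header, e.g. "[md0]:"
        /Map State/    { degraded = ($NF == "degraded") }
        /Dirty State/  { if (degraded && $NF == "dirty") print vol }
    '
}

# Usage: mdadm --examine /dev/sda | detect_dirty_degraded
```

Run against the output quoted in this thread it would flag [md0]: only, since [md1] is normal/clean.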

Support for assembling a dirty, degraded IMSM array was recently
added to mdadm:
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=7b99edab2834d5d08ef774b4cff784caaa1a186f

This array cannot be assembled automatically; incremental mode doesn't
support it. To start it, follow these steps:
1. Back up the data on the drives first.

2. Check that your mdadm includes this fix. The simplest way is to
download the source package and check the %patches section of the
mdadm.spec file. If it doesn't, compile mdadm yourself or find a
distribution that includes the fix.

3. Stop this inactive array:
# mdadm -S /dev/md/md0

4. Run assemble on the container with the force flag:
# mdadm -A /dev/md127 /dev/md/md0 --force
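Steps 3 and 4 can be wrapped in a small dry-run script so you can see exactly what will be executed before committing. The `run` helper and `DRY_RUN` knob are my additions, not mdadm features; the device names are the ones from this thread:

```shell
# Echoes the recovery commands when DRY_RUN=1 (the default);
# set DRY_RUN=0 to actually execute them -- only after backing up.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run mdadm -S /dev/md/md0                      # 3. stop the inactive array
run mdadm -A /dev/md127 /dev/md/md0 --force   # 4. force-assemble via the container
```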


You will see the prompt:
"%s array state forced to clean. It may cause data corruption."
The warning is accurate: some data may be invalid. There is no
completely safe way to start your array.

Regards,
Mariusz


