From: "Tkaczyk, Mariusz" <mariusz.tkaczyk@linux.intel.com>
To: 19 Devices linuxraid <19devices+linuxraid@gmail.com>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: Repairing IMSM RAID array "active, FAILED, not started"
Date: Mon, 8 Feb 2021 14:28:57 +0100
Message-ID: <ac370d79-95e8-d0a1-0991-fb12b128818c@linux.intel.com>
In-Reply-To: <03420E24CF73457CAAAEE93529BD8B6C@Tosh10Pro>

Hello,

You have hit the dirty-degraded RAID5 scenario:
 >       Map State : degraded
 >     Dirty State : dirty

Support for assembling a dirty, degraded IMSM array was recently
added to mdadm:
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=7b99edab2834d5d08ef774b4cff784caaa1a186f

This array cannot be assembled automatically; Incremental mode doesn't
support it. To start it, follow these steps:
1. Back up the data on the drives first.
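
As a rough sketch (the device name and destination path below are
illustrative, not taken from your setup), a raw image of each member
drive can be made with dd from the live environment:
# dd if=/dev/sda of=/mnt/backup/sda.img bs=1M status=progress
Repeat this for every drive in the container before changing anything.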

2. Check that mdadm includes this fix. The simplest way is to download
the source package and check the %patches section of the mdadm.spec
file. If the fix is missing, compile mdadm yourself or find a
distribution that ships it.
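
For example, if you decide to build mdadm from its git tree (just one
possible approach, shown here as a sketch), you can confirm the commit
is present before compiling:
# git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
# cd mdadm
# git merge-base --is-ancestor \
    7b99edab2834d5d08ef774b4cff784caaa1a186f HEAD && echo "fix included"
# make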

3. Stop this inactive array:
# mdadm -S /dev/md/md0

4. Run assemble on the container with the --force flag:
# mdadm -A /dev/md127 /dev/md/md0 --force


You will see the prompt:
"%s array state forced to clean. It may cause data corruption."
That is true: some data may be invalid. There is no safe way to start
your array.
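
Once the array starts, it may also be worth checking the filesystem
read-only before mounting it (an optional precaution; adjust the device
or partition name to your layout):
# fsck -n /dev/md125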

Regards,
Mariusz

On 06.02.2021 04:19, 19 Devices linuxraid wrote:
> Hi, I'm hoping you can help me repair this RAID array (md125 below).  It failed 
> after a repeated series of power interruptions.  There are 4 x 1TB drives with 2 
> RAID 5 arrays spread across them.  One array is working (md126) as are all 4 
> drives.
> 
> The boot drive was on the failed array so the system is running from a Fedora 33 
> Live USB.  Details of the 3 arrays and 4 drives follow.
> 
> [root@localhost-live ~]# mdadm -D /dev/md125
> /dev/md125:
>          Container : /dev/md/imsm0, member 0
>       Raid Devices : 4
>      Total Devices : 3
> 
>              State : active, FAILED, Not Started
>     Active Devices : 3
>    Working Devices : 3
>     Failed Devices : 0
>      Spare Devices : 0
> 
> Consistency Policy : unknown
> 
> 
>               UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
>     Number   Major   Minor   RaidDevice State
>        -       0        0        0      removed
>        -       0        0        1      removed
>        -       0        0        2      removed
>        -       0        0        3      removed
> 
>        -       8       32        0      sync   /dev/sdc
>        -       8        0        1      sync   /dev/sda
>        -       8       48        3      sync   /dev/sdd
> [root@localhost-live ~]#
> 
> [root@localhost-live ~]# mdadm -D /dev/md126
> /dev/md126:
>          Container : /dev/md/imsm0, member 1
>         Raid Level : raid5
>         Array Size : 99116032 (94.52 GiB 101.49 GB)
>      Used Dev Size : 33038976 (31.51 GiB 33.83 GB)
>       Raid Devices : 4
>      Total Devices : 4
> 
>              State : clean, degraded, recovering
>     Active Devices : 3
>    Working Devices : 4
>     Failed Devices : 0
>      Spare Devices : 1
> 
>             Layout : left-asymmetric
>         Chunk Size : 128K
> 
> Consistency Policy : resync
> 
>     Rebuild Status : 35% complete
> 
> 
>               UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
>     Number   Major   Minor   RaidDevice State
>        3       8       32        0      active sync   /dev/sdc
>        2       8        0        1      active sync   /dev/sda
>        1       8       16        2      spare rebuilding   /dev/sdb
>        0       8       48        3      active sync   /dev/sdd
> [root@localhost-live ~]#
> 
> [root@localhost-live ~]# mdadm -D /dev/md127
> /dev/md127:
>            Version : imsm
>         Raid Level : container
>      Total Devices : 4
> 
>    Working Devices : 4
> 
> 
>               UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
>      Member Arrays : /dev/md125 /dev/md/md1_0
> 
>     Number   Major   Minor   RaidDevice
> 
>        -       8       32        -        /dev/sdc
>        -       8        0        -        /dev/sda
>        -       8       48        -        /dev/sdd
>        -       8       16        -        /dev/sdb
> [root@localhost-live ~]#
> 
> 
> [root@localhost-live ~]# mdadm --examine /dev/sda
> /dev/sda:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.3.00
>     Orig Family : ab386e31
>          Family : 775b3841
>      Generation : 00458337
>      Attributes : All supported
>            UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
>        Checksum : f25e8e6d correct
>     MPB Sectors : 2
>           Disks : 5
>    RAID Devices : 2
> 
>   Disk01 Serial : WD-WCC3F1681668
>           State : active
>              Id : 00000001
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
> [md0]:
>            UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
>      RAID Level : 5
>         Members : 4
>           Slots : [UU_U]
>     Failed disk : 2
>       This Slot : 1
>     Sector Size : 512
>      Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
>    Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
>   Sector Offset : 0
>     Num Stripes : 7372800
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : degraded
>     Dirty State : dirty
>      RWH Policy : off
> 
> [md1]:
>            UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
>      RAID Level : 5
>         Members : 4
>           Slots : [UUUU]
>     Failed disk : none
>       This Slot : 1
>     Sector Size : 512
>      Array Size : 198232064 (94.52 GiB 101.49 GB)
>    Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
>   Sector Offset : 1887440896
>     Num Stripes : 258117
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
>      RWH Policy : <unknown:128>
> 
>   Disk00 Serial : S13PJDWS608386
>           State : active
>              Id : 00000003
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk02 Serial : D-WMC3F2148323:0
>           State : active
>              Id : ffffffff
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk03 Serial : S13PJDWS608384
>           State : active
>              Id : 00000004
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk04 Serial : WD-WMC3F2148323
>           State : active
>              Id : 00000002
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> [root@localhost-live ~]#
> 
> 
> [root@localhost-live ~]# mdadm --examine /dev/sdb
> /dev/sdb:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.3.00
>     Orig Family : ab386e31
>          Family : 775b3841
>      Generation : 00458337
>      Attributes : All supported
>            UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
>        Checksum : f25e8e6d correct
>     MPB Sectors : 2
>           Disks : 5
>    RAID Devices : 2
> 
>   Disk04 Serial : WD-WMC3F2148323
>           State : active
>              Id : 00000002
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
> [md0]:
>            UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
>      RAID Level : 5
>         Members : 4
>           Slots : [UU_U]
>     Failed disk : 2
>       This Slot : ?
>     Sector Size : 512
>      Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
>    Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
>   Sector Offset : 0
>     Num Stripes : 7372800
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : degraded
>     Dirty State : dirty
>      RWH Policy : off
> 
> [md1]:
>            UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
>      RAID Level : 5
>         Members : 4
>           Slots : [UUUU]
>     Failed disk : none
>       This Slot : 2
>     Sector Size : 512
>      Array Size : 198232064 (94.52 GiB 101.49 GB)
>    Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
>   Sector Offset : 1887440896
>     Num Stripes : 258117
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
>      RWH Policy : <unknown:128>
> 
>   Disk00 Serial : S13PJDWS608386
>           State : active
>              Id : 00000003
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk01 Serial : WD-WCC3F1681668
>           State : active
>              Id : 00000001
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk02 Serial : D-WMC3F2148323:0
>           State : active
>              Id : ffffffff
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk03 Serial : S13PJDWS608384
>           State : active
>              Id : 00000004
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> [root@localhost-live ~]#
> 
> 
> [root@localhost-live ~]# mdadm --examine /dev/sdc
> /dev/sdc:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.3.00
>     Orig Family : ab386e31
>          Family : 775b3841
>      Generation : 00458337
>      Attributes : All supported
>            UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
>        Checksum : f25e8e6d correct
>     MPB Sectors : 2
>           Disks : 5
>    RAID Devices : 2
> 
>   Disk00 Serial : S13PJDWS608386
>           State : active
>              Id : 00000003
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
> [md0]:
>            UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
>      RAID Level : 5
>         Members : 4
>           Slots : [UU_U]
>     Failed disk : 2
>       This Slot : 0
>     Sector Size : 512
>      Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
>    Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
>   Sector Offset : 0
>     Num Stripes : 7372800
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : degraded
>     Dirty State : dirty
>      RWH Policy : off
> 
> [md1]:
>            UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
>      RAID Level : 5
>         Members : 4
>           Slots : [UUUU]
>     Failed disk : none
>       This Slot : 0
>     Sector Size : 512
>      Array Size : 198232064 (94.52 GiB 101.49 GB)
>    Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
>   Sector Offset : 1887440896
>     Num Stripes : 258117
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
>      RWH Policy : <unknown:128>
> 
>   Disk01 Serial : WD-WCC3F1681668
>           State : active
>              Id : 00000001
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk02 Serial : D-WMC3F2148323:0
>           State : active
>              Id : ffffffff
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk03 Serial : S13PJDWS608384
>           State : active
>              Id : 00000004
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk04 Serial : WD-WMC3F2148323
>           State : active
>              Id : 00000002
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> [root@localhost-live ~]#
> 
> 
> [root@localhost-live ~]# mdadm --examine /dev/sdd
> /dev/sdd:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.3.00
>     Orig Family : ab386e31
>          Family : 775b3841
>      Generation : 00458337
>      Attributes : All supported
>            UUID : bdb7f495:21b8c189:e4968216:6f2d6c4c
>        Checksum : f25e8e6d correct
>     MPB Sectors : 2
>           Disks : 5
>    RAID Devices : 2
> 
>   Disk03 Serial : S13PJDWS608384
>           State : active
>              Id : 00000004
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
> [md0]:
>            UUID : 38c20294:230f3d70:a1a5c8bd:8add8ba5
>      RAID Level : 5
>         Members : 4
>           Slots : [UU_U]
>     Failed disk : 2
>       This Slot : 3
>     Sector Size : 512
>      Array Size : 5662310400 (2700.00 GiB 2899.10 GB)
>    Per Dev Size : 1887436800 (900.00 GiB 966.37 GB)
>   Sector Offset : 0
>     Num Stripes : 7372800
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : degraded
>     Dirty State : dirty
>      RWH Policy : off
> 
> [md1]:
>            UUID : 43d19777:6d66ecfa:3113d7a9:4feb07b4
>      RAID Level : 5
>         Members : 4
>           Slots : [UUUU]
>     Failed disk : none
>       This Slot : 3
>     Sector Size : 512
>      Array Size : 198232064 (94.52 GiB 101.49 GB)
>    Per Dev Size : 66077952 (31.51 GiB 33.83 GB)
>   Sector Offset : 1887440896
>     Num Stripes : 258117
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
>      RWH Policy : <unknown:128>
> 
>   Disk00 Serial : S13PJDWS608386
>           State : active
>              Id : 00000003
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk01 Serial : WD-WCC3F1681668
>           State : active
>              Id : 00000001
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk02 Serial : D-WMC3F2148323:0
>           State : active
>              Id : ffffffff
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> 
>   Disk04 Serial : WD-WMC3F2148323
>           State : active
>              Id : 00000002
>     Usable Size : 1953518848 (931.51 GiB 1000.20 GB)
> [root@localhost-live ~]#
> 
> Thanks
> 
> ps. Why was my Outlook.com email address rejected by this server?
> 
> 

