* RAID5 devices assemble into RAID0 array
@ 2017-11-26  1:50 Duane
  2017-11-26 12:04 ` Wols Lists
  0 siblings, 1 reply; 7+ messages in thread
From: Duane @ 2017-11-26  1:50 UTC (permalink / raw)
  To: linux-raid


I have 3 RAID5 devices. When I assemble them, I end up with a RAID0 device.

What is the cause? What is the solution?

All I can think of is that there is only 1 active device. I had 2, but 
then I manually failed one of them. I want to reassemble the RAID5 array 
and then re-add the second device.


mdadm -E /dev/sdc2 /dev/sdd /dev/sde

/dev/sdc2:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x1
      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
            Name : dave:0  (local to host dave)
   Creation Time : Fri Oct  6 10:46:50 2017
      Raid Level : raid5
    Raid Devices : 3

  Avail Dev Size : 7813726208 (3725.88 GiB 4000.63 GB)
      Array Size : 7630592 (7.28 GiB 7.81 GB)
   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=7806142351 sectors
           State : clean
     Device UUID : 0fe7e674:e1149499:7acb8853:c71fed62

Internal Bitmap : 8 sectors from superblock
     Update Time : Sun Nov 19 13:53:23 2017
   Bad Block Log : 512 entries available at offset 24 sectors
        Checksum : 79ee1226 - correct
          Events : 51526

          Layout : left-symmetric
      Chunk Size : 64K

    Device Role : Active device 0
    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x1
      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
            Name : dave:0  (local to host dave)
   Creation Time : Fri Oct  6 10:46:50 2017
      Raid Level : raid5
    Raid Devices : 3

  Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
      Array Size : 7630592 (7.28 GiB 7.81 GB)
   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=7806078897 sectors
           State : clean
     Device UUID : ce2c8c03:f47f505b:c9a34e9a:668b70f5

Internal Bitmap : 8 sectors from superblock
     Update Time : Sun Nov 19 13:53:23 2017
   Bad Block Log : 512 entries available at offset 16 sectors
        Checksum : a398c591 - correct
          Events : 51526

          Layout : left-symmetric
      Chunk Size : 64K

    Device Role : spare
    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x1
      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
            Name : dave:0  (local to host dave)
   Creation Time : Fri Oct  6 10:46:50 2017
      Raid Level : raid5
    Raid Devices : 3

  Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
      Array Size : 7630592 (7.28 GiB 7.81 GB)
   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=7806078897 sectors
           State : clean
     Device UUID : aecb9ce3:e7bc161f:9b8d1764:db815dde

Internal Bitmap : 8 sectors from superblock
     Update Time : Sun Nov 19 13:53:23 2017
   Bad Block Log : 512 entries available at offset 16 sectors
        Checksum : fa2581ac - correct
          Events : 51526

          Layout : left-symmetric
      Chunk Size : 64K

    Device Role : spare
    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)


mdadm -D /dev/md0
/dev/md0:
         Version : 1.2
      Raid Level : raid0
   Total Devices : 3
     Persistence : Superblock is persistent

           State : inactive

            Name : dave:0  (local to host dave)
            UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
          Events : 51526

     Number   Major   Minor   RaidDevice

        -       8       64        -        /dev/sde
        -       8       34        -        /dev/sdc2
        -       8       48        -        /dev/sdd





* Re: RAID5 devices assemble into RAID0 array
  2017-11-26  1:50 RAID5 devices assemble into RAID0 array Duane
@ 2017-11-26 12:04 ` Wols Lists
  2017-11-26 17:29   ` Duane
  0 siblings, 1 reply; 7+ messages in thread
From: Wols Lists @ 2017-11-26 12:04 UTC (permalink / raw)
  To: Duane, linux-raid

On 26/11/17 01:50, Duane wrote:
> I have 3 RAID5 devices. When I assemble them, I end up with a RAID0 device.
> 
> What is the cause? What is the solution?
> 
> All I can think of is the fact that there is only 1 active device. I had
> 2 but then manually failed it. I want to reassemble a RAID5 array and
> then re-add the second device.
> 
OUCH!

Sorry. You have a 3-device raid-5. You only have 1 working device. Your
array is well broken.

You can't "reassemble raid5 then readd the second device". You need to
readd the second device in order to get your raid5 back. I'll let
someone else tell you how, but you need a MINIMUM of two devices to get
your raid working again. Then you need to get your third device added
back otherwise your raid 5 is broken.

DON'T DO ANYTHING WITHOUT ADVICE. I'm sorry, but your message says you
don't understand how raid works, and you are on the verge of destroying
your array irrevocably. It should be a simple recovery, *provided* you
don't make any mistakes.

Cheers,
Wol
> 
> mdadm -E /dev/sdc2 /dev/sdd /dev/sde
> 
> /dev/sdc2:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>            Name : dave:0  (local to host dave)
>   Creation Time : Fri Oct  6 10:46:50 2017
>      Raid Level : raid5
>    Raid Devices : 3
> 
>  Avail Dev Size : 7813726208 (3725.88 GiB 4000.63 GB)
>      Array Size : 7630592 (7.28 GiB 7.81 GB)
>   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=7806142351 sectors
>           State : clean
>     Device UUID : 0fe7e674:e1149499:7acb8853:c71fed62
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Sun Nov 19 13:53:23 2017
>   Bad Block Log : 512 entries available at offset 24 sectors
>        Checksum : 79ee1226 - correct
>          Events : 51526
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 0
>    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdd:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>            Name : dave:0  (local to host dave)
>   Creation Time : Fri Oct  6 10:46:50 2017
>      Raid Level : raid5
>    Raid Devices : 3
> 
>  Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
>      Array Size : 7630592 (7.28 GiB 7.81 GB)
>   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=7806078897 sectors
>           State : clean
>     Device UUID : ce2c8c03:f47f505b:c9a34e9a:668b70f5
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Sun Nov 19 13:53:23 2017
>   Bad Block Log : 512 entries available at offset 16 sectors
>        Checksum : a398c591 - correct
>          Events : 51526
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : spare
>    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sde:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>            Name : dave:0  (local to host dave)
>   Creation Time : Fri Oct  6 10:46:50 2017
>      Raid Level : raid5
>    Raid Devices : 3
> 
>  Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
>      Array Size : 7630592 (7.28 GiB 7.81 GB)
>   Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=7806078897 sectors
>           State : clean
>     Device UUID : aecb9ce3:e7bc161f:9b8d1764:db815dde
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Sun Nov 19 13:53:23 2017
>   Bad Block Log : 512 entries available at offset 16 sectors
>        Checksum : fa2581ac - correct
>          Events : 51526
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : spare
>    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
> 
> 
> mdadm -D /dev/md0
> /dev/md0:
>         Version : 1.2
>      Raid Level : raid0
>   Total Devices : 3
>     Persistence : Superblock is persistent
> 
>           State : inactive
> 
>            Name : dave:0  (local to host dave)
>            UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>          Events : 51526
> 
>     Number   Major   Minor   RaidDevice
> 
>        -       8       64        -        /dev/sde
>        -       8       34        -        /dev/sdc2
>        -       8       48        -        /dev/sdd
> 
> 



* Re: RAID5 devices assemble into RAID0 array
  2017-11-26 12:04 ` Wols Lists
@ 2017-11-26 17:29   ` Duane
  2017-11-26 21:11     ` Wols Lists
  0 siblings, 1 reply; 7+ messages in thread
From: Duane @ 2017-11-26 17:29 UTC (permalink / raw)
  To: linux-raid


You're right: I failed and removed device 3, then failed and removed 
device 2, which broke my array.

Let's assume I haven't messed things up any more than I already have.

Is there a method to reverse the above operations for device 2 and device 3?

Thanks, Duane



On 2017-11-26 05:04 AM, Wols Lists wrote:
> On 26/11/17 01:50, Duane wrote:
>> I have 3 RAID5 devices. When I assemble them, I end up with a RAID0 device.
>>
>> What is the cause? What is the solution?
>>
>> All I can think of is the fact that there is only 1 active device. I had
>> 2 but then manually failed it. I want to reassemble a RAID5 array and
>> then re-add the second device.
>>
> OUCH!
>
> Sorry. You have a 3-device raid-5. You only have 1 working device. Your
> array is well broken.
>
> You can't "reassemble raid5 then readd the second device". You need to
> readd the second device in order to get your raid5 back. I'll let
> someone else tell you how, but you need a MINIMUM of two devices to get
> your raid working again. Then you need to get your third device added
> back otherwise your raid 5 is broken.
>
> DON'T DO ANYTHING WITHOUT ADVICE. I'm sorry, but your message says you
> don't understand how raid works, and you are on the verge of destroying
> your array irrevocably. It should be a simple recovery, *provided* you
> don't make any mistakes.
>
> Cheers,
> Wol
>> mdadm -E /dev/sdc2 /dev/sdd /dev/sde
>>
>> /dev/sdc2:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x1
>>       Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>>             Name : dave:0  (local to host dave)
>>    Creation Time : Fri Oct  6 10:46:50 2017
>>       Raid Level : raid5
>>     Raid Devices : 3
>>
>>   Avail Dev Size : 7813726208 (3725.88 GiB 4000.63 GB)
>>       Array Size : 7630592 (7.28 GiB 7.81 GB)
>>    Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=262064 sectors, after=7806142351 sectors
>>            State : clean
>>      Device UUID : 0fe7e674:e1149499:7acb8853:c71fed62
>>
>> Internal Bitmap : 8 sectors from superblock
>>      Update Time : Sun Nov 19 13:53:23 2017
>>    Bad Block Log : 512 entries available at offset 24 sectors
>>         Checksum : 79ee1226 - correct
>>           Events : 51526
>>
>>           Layout : left-symmetric
>>       Chunk Size : 64K
>>
>>     Device Role : Active device 0
>>     Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdd:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x1
>>       Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>>             Name : dave:0  (local to host dave)
>>    Creation Time : Fri Oct  6 10:46:50 2017
>>       Raid Level : raid5
>>     Raid Devices : 3
>>
>>   Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
>>       Array Size : 7630592 (7.28 GiB 7.81 GB)
>>    Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=262064 sectors, after=7806078897 sectors
>>            State : clean
>>      Device UUID : ce2c8c03:f47f505b:c9a34e9a:668b70f5
>>
>> Internal Bitmap : 8 sectors from superblock
>>      Update Time : Sun Nov 19 13:53:23 2017
>>    Bad Block Log : 512 entries available at offset 16 sectors
>>         Checksum : a398c591 - correct
>>           Events : 51526
>>
>>           Layout : left-symmetric
>>       Chunk Size : 64K
>>
>>     Device Role : spare
>>     Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sde:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x1
>>       Array UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>>             Name : dave:0  (local to host dave)
>>    Creation Time : Fri Oct  6 10:46:50 2017
>>       Raid Level : raid5
>>     Raid Devices : 3
>>
>>   Avail Dev Size : 7813709489 (3725.87 GiB 4000.62 GB)
>>       Array Size : 7630592 (7.28 GiB 7.81 GB)
>>    Used Dev Size : 7630592 (3.64 GiB 3.91 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=262064 sectors, after=7806078897 sectors
>>            State : clean
>>      Device UUID : aecb9ce3:e7bc161f:9b8d1764:db815dde
>>
>> Internal Bitmap : 8 sectors from superblock
>>      Update Time : Sun Nov 19 13:53:23 2017
>>    Bad Block Log : 512 entries available at offset 16 sectors
>>         Checksum : fa2581ac - correct
>>           Events : 51526
>>
>>           Layout : left-symmetric
>>       Chunk Size : 64K
>>
>>     Device Role : spare
>>     Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
>>
>>
>> mdadm -D /dev/md0
>> /dev/md0:
>>          Version : 1.2
>>       Raid Level : raid0
>>    Total Devices : 3
>>      Persistence : Superblock is persistent
>>
>>            State : inactive
>>
>>             Name : dave:0  (local to host dave)
>>             UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>>           Events : 51526
>>
>>      Number   Major   Minor   RaidDevice
>>
>>         -       8       64        -        /dev/sde
>>         -       8       34        -        /dev/sdc2
>>         -       8       48        -        /dev/sdd
>>
>>




* Re: RAID5 devices assemble into RAID0 array
  2017-11-26 17:29   ` Duane
@ 2017-11-26 21:11     ` Wols Lists
  2017-11-28  2:52       ` Duane
  0 siblings, 1 reply; 7+ messages in thread
From: Wols Lists @ 2017-11-26 21:11 UTC (permalink / raw)
  To: Duane, linux-raid

On 26/11/17 17:29, Duane wrote:
> You're right I failed and removed device 3, then failed and removed
> device 2, which broke my array.
> 
> Let's assume I haven't messed things up any more than I already have.
> 
> Is there a method to reverse the above operations for device 2 and
> device 3?
> 
> Thanks, Duane
> 
My raid-fu is mostly theoretical, but what I would try is as follows.

It's assembled the broken array as md0, so

mdadm /dev/md0 --re-add /dev/device2

followed by

mdadm /dev/md0 --re-add /dev/device3

That won't cause any further damage, and may work fine.

Beyond that, I wouldn't like to suggest anything - if it doesn't work
you'll probably need to use things like --force, which could easily
break stuff. But re-add will probably work. And my syntax might not work
- you might need to specify the mode like --grow or --manage, I don't know.
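
If it does come to the explicit form, with your actual device names it
would look something like this (purely a sketch: I'm assuming /dev/sdd
and /dev/sde are the two members you failed out, going by your -E
output, and that the broken array really is sitting at /dev/md0):

mdadm --manage /dev/md0 --re-add /dev/sdd   # second member
mdadm --manage /dev/md0 --re-add /dev/sde   # third member
cat /proc/mdstat    # see whether md0 comes up as raid5 and starts rebuilding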

Suck it and see, at least this won't do any damage, and if it doesn't
work we'll have to wait for further advice.

Cheers,
Wol
> 
> 
> On 2017-11-26 05:04 AM, Wols Lists wrote:
>> On 26/11/17 01:50, Duane wrote:
>>> I have 3 RAID5 devices. When I assemble them, I end up with a RAID0
>>> device.
>>>
>>> What is the cause? What is the solution?
>>>
>>> All I can think of is the fact that there is only 1 active device. I had
>>> 2 but then manually failed it. I want to reassemble a RAID5 array and
>>> then re-add the second device.
>>>
>> OUCH!
>>
>> Sorry. You have a 3-device raid-5. You only have 1 working device. Your
>> array is well broken.
>>
>> You can't "reassemble raid5 then readd the second device". You need to
>> readd the second device in order to get your raid5 back. I'll let
>> someone else tell you how, but you need a MINIMUM of two devices to get
>> your raid working again. Then you need to get your third device added
>> back otherwise your raid 5 is broken.
>>
>> DON'T DO ANYTHING WITHOUT ADVICE. I'm sorry, but your message says you
>> don't understand how raid works, and you are on the verge of destroying
>> your array irrevocably. It should be a simple recovery, *provided* you
>> don't make any mistakes.
>>
>> Cheers,
>> Wol



* Re: RAID5 devices assemble into RAID0 array
  2017-11-26 21:11     ` Wols Lists
@ 2017-11-28  2:52       ` Duane
  2017-11-30 21:39         ` Duane
  0 siblings, 1 reply; 7+ messages in thread
From: Duane @ 2017-11-28  2:52 UTC (permalink / raw)
  To: linux-raid


OK, trying to re-add got me this problem.

# mdadm  -A /dev/md0 /dev/sdc2
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.

# mdadm  /dev/md0 --re-add /dev/sdd
mdadm: Cannot get array info for /dev/md0

Looking at the array,

# mdadm -D /dev/md0
/dev/md0:
         Version : 1.2
      Raid Level : raid0
   Total Devices : 1
     Persistence : Superblock is persistent

           State : inactive

            Name : dave:0  (local to host dave)
            UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
          Events : 51526

     Number   Major   Minor   RaidDevice

        -       8       34        -        /dev/sdc2

I cannot re-add the missing devices.

Doing "mdadm -A /dev/md0 -s" gets me the same array configuration as I 
showed in my original post.

Does anyone know why it insists on going to RAID0?
Does anyone know how I can re-add the devices?
Thank you,
Duane



On 2017-11-26 02:11 PM, Wols Lists wrote:
> On 26/11/17 17:29, Duane wrote:
>> You're right I failed and removed device 3, then failed and removed
>> device 2, which broke my array.
>>
>> Let's assume I haven't messed things up any more than I already have.
>>
>> Is there a method to reverse the above operations for device 2 and
>> device 3?
>>
>> Thanks, Duane
>>
> My raid-fu is mostly theoretical, but what I would try is as follows.
>
> It's assembled the broken array as md0, so
>
> mdadm /dev/md0 --re-add /dev/device2
>
> followed by
>
> mdadm /dev/md0 --re-add /dev/device3
>
> That won't cause any further damage, and may work fine.
>
> Beyond that, I wouldn't like to suggest anything - if it doesn't work
> you'll probably need to use things like --force, which could easily
> break stuff. But re-add will probably work. And my syntax might not work
> - you might need to specify the mode like --grow or --manage, I don't know.
>
> Suck it and see, at least this won't do any damage, and if it doesn't
> work we'll have to wait for further advice.
>
> Cheers,
> Wol
>>
>> On 2017-11-26 05:04 AM, Wols Lists wrote:
>>> On 26/11/17 01:50, Duane wrote:
>>>> I have 3 RAID5 devices. When I assemble them, I end up with a RAID0
>>>> device.
>>>>
>>>> What is the cause? What is the solution?
>>>>
>>>> All I can think of is the fact that there is only 1 active device. I had
>>>> 2 but then manually failed it. I want to reassemble a RAID5 array and
>>>> then re-add the second device.
>>>>
>>> OUCH!
>>>
>>> Sorry. You have a 3-device raid-5. You only have 1 working device. Your
>>> array is well broken.
>>>
>>> You can't "reassemble raid5 then readd the second device". You need to
>>> readd the second device in order to get your raid5 back. I'll let
>>> someone else tell you how, but you need a MINIMUM of two devices to get
>>> your raid working again. Then you need to get your third device added
>>> back otherwise your raid 5 is broken.
>>>
>>> DON'T DO ANYTHING WITHOUT ADVICE. I'm sorry, but your message says you
>>> don't understand how raid works, and you are on the verge of destroying
>>> your array irrevocably. It should be a simple recovery, *provided* you
>>> don't make any mistakes.
>>>
>>> Cheers,
>>> Wol




* Re: RAID5 devices assemble into RAID0 array
  2017-11-28  2:52       ` Duane
@ 2017-11-30 21:39         ` Duane
  2017-12-02 16:52           ` Phil Turmel
  0 siblings, 1 reply; 7+ messages in thread
From: Duane @ 2017-11-30 21:39 UTC (permalink / raw)
  To: linux-raid


I need to tie off this thread.

I got my data back. I ended up manually (with a script) merging all the 
strips.

On later test RAID5 arrays, I removed all the devices. Then I recorded 
the configuration of the array from the metadata on the devices, cleared 
the superblocks, and recreated the array. What mattered were the level, 
chunk size, and device order. Data offset and layout were the defaults, 
so I didn't specify them. The data area was never touched, so with the 
same configuration as the old array it came back in a sane format.
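
In command terms, "recording the configuration" just meant saving every
member's superblock details before clearing anything, roughly like this
(the device names are the ones from my original array, purely as an
example; the test arrays used different disks):

# keep a copy of level, chunk size, layout, data offset and device roles
for dev in /dev/sdc2 /dev/sdd /dev/sde; do
    mdadm -E "$dev"
done > /root/md0-superblocks.txt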


************************************************************

Therefore, from my experience, the best way to recover a FAILED array is 
to recreate a new array with the same devices and settings.

************************************************************

If anyone less noobish can give a better answer, I welcome it.


PS

Of course, this only works if the RAID devices were manually marked as 
failed and really are still good.




On 2017-11-27 07:52 PM, Duane wrote:
> OK trying to re-add got me this problem.
>
> # mdadm  -A /dev/md0 /dev/sdc2
> mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.
>
> # mdadm  /dev/md0 --re-add /dev/sdd
> mdadm: Cannot get array info for /dev/md0
>
> Looking at the array,
>
> # mdadm -D /dev/md0
> /dev/md0:
>         Version : 1.2
>      Raid Level : raid0
>   Total Devices : 1
>     Persistence : Superblock is persistent
>
>           State : inactive
>
>            Name : dave:0  (local to host dave)
>            UUID : f8c0d29d:9b5986a4:050ca914:3a2fb8c8
>          Events : 51526
>
>     Number   Major   Minor   RaidDevice
>
>        -       8       34        -        /dev/sdc2
>
> I cannot re-add the missing devices.
>
> Doing "mdadm -A /dev/md0 -s" gets me the same array configuration as I 
> showed in my original post.
>
> Does anyone know why it insists on going to RAID0?
> Does anyone know how I can re-add the devices?
> Thank you,
> Duane
>
>
>
> On 2017-11-26 02:11 PM, Wols Lists wrote:
>> On 26/11/17 17:29, Duane wrote:
>>> You're right I failed and removed device 3, then failed and removed
>>> device 2, which broke my array.
>>>
>>> Let's assume I haven't messed things up any more than I already have.
>>>
>>> Is there a method to reverse the above operations for device 2 and
>>> device 3?
>>>
>>> Thanks, Duane
>>>
>> My raid-fu is mostly theoretical, but what I would try is as follows.
>>
>> It's assembled the broken array as md0, so
>>
>> mdadm /dev/md0 --re-add /dev/device2
>>
>> followed by
>>
>> mdadm /dev/md0 --re-add /dev/device3
>>
>> That won't cause any further damage, and may work fine.
>>
>> Beyond that, I wouldn't like to suggest anything - if it doesn't work
>> you'll probably need to use things like --force, which could easily
>> break stuff. But re-add will probably work. And my syntax might not work
>> - you might need to specify the mode like --grow or --manage, I don't 
>> know.
>>
>> Suck it and see, at least this won't do any damage, and if it doesn't
>> work we'll have to wait for further advice.
>>
>> Cheers,
>> Wol
>>>
>>> On 2017-11-26 05:04 AM, Wols Lists wrote:
>>>> On 26/11/17 01:50, Duane wrote:
>>>>> I have 3 RAID5 devices. When I assemble them, I end up with a RAID0
>>>>> device.
>>>>>
>>>>> What is the cause? What is the solution?
>>>>>
>>>>> All I can think of is the fact that there is only 1 active device. 
>>>>> I had
>>>>> 2 but then manually failed it. I want to reassemble a RAID5 array and
>>>>> then re-add the second device.
>>>>>
>>>> OUCH!
>>>>
>>>> Sorry. You have a 3-device raid-5. You only have 1 working device. 
>>>> Your
>>>> array is well broken.
>>>>
>>>> You can't "reassemble raid5 then readd the second device". You need to
>>>> readd the second device in order to get your raid5 back. I'll let
>>>> someone else tell you how, but you need a MINIMUM of two devices to 
>>>> get
>>>> your raid working again. Then you need to get your third device added
>>>> back otherwise your raid 5 is broken.
>>>>
>>>> DON'T DO ANYTHING WITHOUT ADVICE. I'm sorry, but your message says you
>>>> don't understand how raid works, and you are on the verge of 
>>>> destroying
>>>> your array irrevocably. It should be a simple recovery, *provided* you
>>>> don't make any mistakes.
>>>>
>>>> Cheers,
>>>> Wol
>




* Re: RAID5 devices assemble into RAID0 array
  2017-11-30 21:39         ` Duane
@ 2017-12-02 16:52           ` Phil Turmel
  0 siblings, 0 replies; 7+ messages in thread
From: Phil Turmel @ 2017-12-02 16:52 UTC (permalink / raw)
  To: Duane, linux-raid

On 11/30/2017 04:39 PM, Duane wrote:
> I need to tie off this thread.
> 
> I got my data. I ended up manually (scriptly) merging all the strips.
> 
> On later test RAID5 arrays, I removed all the devices. Then I recorded
> the configuration of the array from the metadata in the devices. I
> cleared the superblocks on the devices and then recreated the array. Of
> importance were level, chunk size, and device order. Data offset and
> layout were the default so I didn't specify them. The data area was not
> changed and, with the same configuration as the last array, was in a
> sane format.

Well done.  Based on your OP, this was the correct solution.  { Sorry I
couldn't contribute this week -- business travel. }

> ************************************************************
> 
> Therefore, from my experience, the best way to recover a FAILED array is
> to recreate a new array with the same devices and settings.

No, most people don't deliberately fail a device out of an array that's
already degraded.  That's an operator error that deleted the device role
information needed for any variation of --assemble.

Manual --assemble in its various forms is always preferred over
--create, and if there was any reshape in progress, it is the only way
to succeed.
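
In the usual case, where the role information is still intact, "its
various forms" means something along these lines (a sketch only, using
the device names from your original post):

mdadm --stop /dev/md0       # stop the partial, inactive assembly first
mdadm --assemble --force /dev/md0 /dev/sdc2 /dev/sdd /dev/sde
cat /proc/mdstat            # confirm it came up as raid5, not inactive

--force lets mdadm reconcile modest event-count differences.  It could
not have helped here precisely because the failed members had been
demoted to spares, i.e. their role information was already gone.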

The key to successful use of --create --assume-clean is to fully
understand all of the array settings and member device order, in a
situation where the layout is unmixed.  You were paranoid enough to
collect and use these details.  You didn't have a reshape in progress.
You are extremely rare.
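
For the archives, the shape of that last-resort operation, using the
settings visible in your -E output (raid5, 64K chunk, left-symmetric,
default data offset, sdc2 as device 0), is roughly:

mdadm --stop /dev/md0
mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=3 \
      --chunk=64 --layout=left-symmetric /dev/sdc2 /dev/sdd /dev/sde
fsck -n /dev/md0    # read-only check, assuming a filesystem sits directly on md0

Treat that strictly as a sketch: the member order shown is illustrative
(the real roles of sdd and sde would have to come from records made
before they were lost), and getting the order, chunk size or data offset
wrong writes a plausible-looking but scrambled layout over the data.
Which is exactly why --assemble comes first.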

This mailing list is full of horror stories from people who ask for help
*after* using --create --assume-clean in their recovery attempts, and
who had *not* collected the necessary details.  --create blows away the
prior superblocks, preventing collection of those details after the fact.

So NO, --create is rarely the "best way" to recover a failed array.

Phil

