linux-raid.vger.kernel.org archive mirror
* regression: drive was detected as raid member due to metadata on partition
@ 2024-04-08 23:31 Sven Köhler
  2024-04-10  1:56 ` Li Nan
  2024-05-07  7:32 ` Mariusz Tkaczyk
  0 siblings, 2 replies; 10+ messages in thread
From: Sven Köhler @ 2024-04-08 23:31 UTC (permalink / raw)
  To: linux-raid

Hi,

I was shocked to find that upon reboot, my Linux machine was detecting 
/dev/sd[abcd] as members of a RAID array. It would assign those members 
to /dev/md4. It would not run the RAID arrays /dev/mdX with members 
/dev/sd[abcd]X for X=1,2,3,4 as it had done for the past couple of 
years.

My server was probably a unicorn in the sense that it used metadata 
version 0.90. This version of the software RAID metadata is stored at 
the _end_ of the member device, which in my case is a partition: 
/dev/sda4 is the last partition on drive /dev/sda. I confirmed with 
mdadm --examine that metadata with the identical UUID was found on 
both /dev/sda4 and /dev/sda.
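
For reference, a check along these lines showed the duplicate 
superblock (output trimmed to the UUID lines; exact formatting will 
differ):

   mdadm --examine /dev/sda4 | grep UUID
   mdadm --examine /dev/sda  | grep UUID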

Here's what I think went wrong: I believe either the kernel or mdadm 
(likely the latter) saw the metadata at the end of /dev/sda and ignored 
the fact that the location of the metadata was actually owned by a 
partition (namely /dev/sda4). The same happened for /dev/sd[bcd], and 
thus I ended up with /dev/md4 being started with members /dev/sd[abcd] 
instead of members /dev/sd[abcd]4.

This behavior started recently. I saw in the logs that I had updated 
mdadm but also the Linux kernel. mdadm and an appropriate mdadm.conf 
are part of my initcpio. My mdadm.conf lists the arrays with their 
metadata version and their UUID.

Starting a RAID array with members /dev/sd[abcd] somehow removed the 
partitions of the drives. The partition table was still present, but 
the partitions disappeared from /dev. So /dev/sd[abcd]1-3 were no 
longer visible, and thus /dev/md1-3 were not started.

I strongly believe that mdadm should ignore any metadata - regardless 
of the version - that is located in a region owned by any of the 
partitions. While I'm not 100% sure how to implement that, the 
following might also work: first scan the partitions for metadata, then 
ignore metadata on the parent device if its UUID was already found on 
one of its partitions.
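
As a rough illustration only (not a concrete implementation proposal; 
the device glob and the UUID parsing below are simplifying 
assumptions), the idea is something like:

   # flag whole-disk devices whose md superblock is also visible on one of their partitions
   for disk in /dev/sd[a-z]; do
       disk_uuid=$(mdadm --examine "$disk" 2>/dev/null | awk '$1 == "UUID" {print $3}')
       [ -n "$disk_uuid" ] || continue
       for part in "$disk"[0-9]*; do
           [ -b "$part" ] || continue
           part_uuid=$(mdadm --examine "$part" 2>/dev/null | awk '$1 == "UUID" {print $3}')
           if [ "$part_uuid" = "$disk_uuid" ]; then
               echo "$disk: same superblock also found on $part -- ignore the whole-disk match"
           fi
       done
   done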


I did the right thing and converted my RAID arrays to metadata 1.2, 
but I'd like to save others from the adrenaline shock.



Kind Regards,
   Sven


* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-08 23:31 regression: drive was detected as raid member due to metadata on partition Sven Köhler
@ 2024-04-10  1:56 ` Li Nan
  2024-04-10 20:59   ` Sven Köhler
  2024-05-07  7:32 ` Mariusz Tkaczyk
  1 sibling, 1 reply; 10+ messages in thread
From: Li Nan @ 2024-04-10  1:56 UTC (permalink / raw)
  To: Sven Köhler, linux-raid

Hi, Köhler

On 2024/4/9 7:31, Sven Köhler wrote:
> Hi,
> 
> I was shocked to find that upon reboot, my Linux machine was detecting 
> /dev/sd[abcd] as members of a raid array. It would assign those members to  
> /dev/md4. It would not run the raid arrays /dev/mdX with members 
> /dev/sd[abcd]X for X=1,2,3,4 as it usually did for the past couple of years.
> 
> My server was probably a unicorn in the sense that it used metadata version 
> 0.90. This version of software RAID metadata is stored at the _end_ of a 
> partition. In my case, /dev/sda4 would be the last partition on drive 
> /dev/sda. I confirmed with mdadm --examine that metadata with the identical 
> UUID would be found on both /dev/sda4 and /dev/sda.
> 

I am trying to reproduce it, but after reboot, md0 started with members
/dev/sd[bc]2 correctly. And mdadm warns if assembling with 'mdadm -A'.

   # mdadm -CR /dev/md0 -l1 -n2 /dev/sd[bc]2 --metadata=0.9
   # mdadm -S --scan
   # mdadm -A --scan
   mdadm: WARNING /dev/sde2 and /dev/sde appear to have very similar 
superblocks.
         If they are really different, please --zero the superblock on one
         If they are the same or overlap, please remove one from the
         DEVICE list in mdadm.conf.
   mdadm: No arrays found in config file or automatically

Can you tell me how you created and configured the RAID?

> Here's what I think went wrong: I believe either the kernel or mdadm 
> (likely the latter) was seeing the metadata at the end of /dev/sda and 
> ignored the fact that the location of the metadata was actually owned by a 
> partition (namely /dev/sda4). The same happened for /dev/sd[bcd] and thus I 
> ended up with /dev/md4 being started with members /dev/sd[abcd] instead of 
> members /dev/sd[abcd]4.
> 
> This behavior started recently. I saw in the logs that I had updated mdadm 
> but also the Linux kernel. mdadm and an appropriate mdadm.conf is part of 
> my initcpio. My mdadm.conf lists the arrays with their metadata version and 
> their UUID.
> 
> Starting a RAID array with members /dev/sd[abcd] somehow removed the 
> partitions of the drives. The partition table would still be present, but 
> the partitions would disappear from /dev. So /dev/sd[abcd]1-3 were not 
> visible anymore and thus /dev/md1-3 would not be started.
> 
> I strongly believe that mdadm should ignore any metadata - regardless of 
> the version - that is at a location owned by any of the partitions. While 
> I'm not 100% sure how to implement that, the following might also work: 
> first scan the partitions for metadata, then ignore if the parent device 
> has metadata with a UUID previously found.
> 
> 
> I did the right thing and converted my RAID arrays to metadata 1.2, but I'd 
> like to save others from the adrenaline shock.
> 
> 
> 
> Kind Regards,
>    Sven
> 
> .

-- 
Thanks,
Nan



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-10  1:56 ` Li Nan
@ 2024-04-10 20:59   ` Sven Köhler
  2024-04-11  2:25     ` Li Nan
  0 siblings, 1 reply; 10+ messages in thread
From: Sven Köhler @ 2024-04-10 20:59 UTC (permalink / raw)
  To: Li Nan, linux-raid

Hi,

On 10.04.24 at 03:56, Li Nan wrote:
> Hi, Köhler
> 
> On 2024/4/9 7:31, Sven Köhler wrote:
>> Hi,
>>
>> I was shocked to find that upon reboot, my Linux machine was detecting 
>> /dev/sd[abcd] as members of a raid array. It would assign those 
>> members to /dev/md4. It would not run the raid arrays /dev/mdX with 
>> members /dev/sd[abcd]X for X=1,2,3,4 as it usually did for the past 
>> couple of years.
>>
>> My server was probably a unicorn in the sense that it used metadata 
>> version 0.90. This version of software RAID metadata is stored at the 
>> _end_ of a partition. In my case, /dev/sda4 would be the last 
>> partition on drive /dev/sda. I confirmed with mdadm --examine that 
>> metadata with the identical UUID would be found on both /dev/sda4 and 
>> /dev/sda.
>>
> 
> I am trying to reproduce it, but after reboot, md0 started with members
> /dev/sd[bc]2 correctly. And mdadm warns if assembling with 'mdadm -A'.
> 
>    # mdadm -CR /dev/md0 -l1 -n2 /dev/sd[bc]2 --metadata=0.9
>    # mdadm -S --scan
>    # mdadm -A --scan
>    mdadm: WARNING /dev/sde2 and /dev/sde appear to have very similar 
> superblocks.
>          If they are really different, please --zero the superblock on one
>          If they are the same or overlap, please remove one from the
>          DEVICE list in mdadm.conf.
>    mdadm: No arrays found in config file or automatically
> 
> Can you tell me how you create and config the RAID?

I should have mentioned the mdadm and kernel version. I am using mdadm 
4.3-2 and linux-lts 6.6.23-1 on Arch Linux.

I created the array very similarly to what you did:
mdadm --create /dev/md4 --level=6 --raid-devices=4 --metadata=0.90 
/dev/sd[abcd]4

My mdadm.conf looks like this:
DEVICE partitions
ARRAY /dev/md/4 metadata=0.90  UUID=...

And /proc/partitions looks like this:

major minor  #blocks  name
    8        0 2930266584 sda
    8        1    1048576 sda1
    8        2   33554432 sda2
    8        3   10485760 sda3
    8        4 2885176775 sda4
    8       16 2930266584 sdb
    8       17    1048576 sdb1
    8       18   33554432 sdb2
    8       19   10485760 sdb3
    8       20 2885176775 sdb4
    8       32 2930266584 sdc
    8       33    1048576 sdc1
    8       34   33554432 sdc2
    8       35   10485760 sdc3
    8       36 2885176775 sdc4
    8       48 2930266584 sdd
    8       49    1048576 sdd1
    8       50   33554432 sdd2
    8       51   10485760 sdd3
    8       52 2885176775 sdd4


Interestingly, sda, sdb, etc. are included. So "DEVICE partitions" 
actually considers the whole-disk devices as well.
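
If I read the mdadm.conf man page correctly, "DEVICE partitions" means 
every name listed in /proc/partitions is a scan candidate, i.e. roughly:

   awk 'NR > 2 {print $4}' /proc/partitions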


>> Here's what I think went wrong: I believe either the kernel or mdadm 
>> (likely the latter) was seeing the metadata at the end of /dev/sda and 
>> ignored the fact that the location of the metadata was actually owned 
>> by a partition (namely /dev/sda4). The same happened for /dev/sd[bcd] 
>> and thus I ended up with /dev/md4 being started with members 
>> /dev/sd[abcd] instead of members /dev/sd[abcd]4.
>>
>> This behavior started recently. I saw in the logs that I had updated 
>> mdadm but also the Linux kernel. mdadm and an appropriate mdadm.conf 
>> is part of my initcpio. My mdadm.conf lists the arrays with their 
>> metadata version and their UUID.
>>
>> Starting a RAID array with members /dev/sd[abcd] somehow removed the 
>> partitions of the drives. The partition table would still be present, 
>> but the partitions would disappear from /dev. So /dev/sd[abcd]1-3 
>> were not visible anymore and thus /dev/md1-3 would not be started.
>>
>> I strongly believe that mdadm should ignore any metadata - regardless 
>> of the version - that is at a location owned by any of the partitions. 
>> While I'm not 100% sure how to implement that, the following might 
>> also work: first scan the partitions for metadata, then ignore if the 
>> parent device has metadata with a UUID previously found.
>>
>>
>> I did the right thing and converted my RAID arrays to metadata 1.2, 
>> but I'd like to save others from the adrenaline shock.
>>
>>
>>
>> Kind Regards,
>>    Sven
>>
>> .
> 


* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-10 20:59   ` Sven Köhler
@ 2024-04-11  2:25     ` Li Nan
  2024-04-13 21:37       ` Sven Köhler
  0 siblings, 1 reply; 10+ messages in thread
From: Li Nan @ 2024-04-11  2:25 UTC (permalink / raw)
  To: Sven Köhler, Li Nan, linux-raid

Hi,

On 2024/4/11 4:59, Sven Köhler wrote:
> Hi,
> 
> On 10.04.24 at 03:56, Li Nan wrote:
>> Hi, Köhler
>>
>> On 2024/4/9 7:31, Sven Köhler wrote:

[...]

> 
> I should have mentioned the mdadm and kernel version. I am using mdadm 
> 4.3-2 and linux-lts 6.6.23-1 on Arch Linux.
> 
> I created the array very similar to what you did:
> mdadm --create /dev/md4 --level=6 --raid-devices=4 --metadata=0.90 
> /dev/sd[abcd]4
> 
> My mdadm.conf looks like this:
> DEVICE partitions
> ARRAY /dev/md/4 metadata=0.90  UUID=...
> 
> And /proc/partitions looks like this:
> 
> major minor  #blocks  name
>     8        0 2930266584 sda
>     8        1    1048576 sda1
>     8        2   33554432 sda2
>     8        3   10485760 sda3
>     8        4 2885176775 sda4
>     8       16 2930266584 sdb
>     8       17    1048576 sdb1
>     8       18   33554432 sdb2
>     8       19   10485760 sdb3
>     8       20 2885176775 sdb4
>     8       32 2930266584 sdc
>     8       33    1048576 sdc1
>     8       34   33554432 sdc2
>     8       35   10485760 sdc3
>     8       36 2885176775 sdc4
>     8       48 2930266584 sdd
>     8       49    1048576 sdd1
>     8       50   33554432 sdd2
>     8       51   10485760 sdd3
>     8       52 2885176775 sdd4
> 
> 
> Interestingly, sda, sdb, etc. are included. So "DEVICE partitions" actually 
> considers them.
> 

I used your command and config, and updated the kernel and mdadm, but
the RAID was still assembled correctly after reboot.

My OS is Fedora; could it have been affected by some other system tool?
I have no idea.

-- 
Thanks,
Nan



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-11  2:25     ` Li Nan
@ 2024-04-13 21:37       ` Sven Köhler
  2024-04-18  7:31         ` Li Nan
  0 siblings, 1 reply; 10+ messages in thread
From: Sven Köhler @ 2024-04-13 21:37 UTC (permalink / raw)
  To: Li Nan, linux-raid

Hi,

On 11.04.24 at 04:25, Li Nan wrote:
> Hi,
>
> On 2024/4/11 4:59, Sven Köhler wrote:
>> Hi,
>>
>> On 10.04.24 at 03:56, Li Nan wrote:
>>> Hi, Köhler
>>>
>>> On 2024/4/9 7:31, Sven Köhler wrote:
>
> [...]
>
>>
>> I should have mentioned the mdadm and kernel version. I am using 
>> mdadm 4.3-2 and linux-lts 6.6.23-1 on Arch Linux.
>>
>> I created the array very similar to what you did:
>> mdadm --create /dev/md4 --level=6 --raid-devices=4 --metadata=0.90 
>> /dev/sd[abcd]4
>>
>> My mdadm.conf looks like this:
>> DEVICE partitions
>> ARRAY /dev/md/4 metadata=0.90  UUID=...
>>
>> And /proc/partitions looks like this:
>>
>> major minor  #blocks  name
>>     8        0 2930266584 sda
>>     8        1    1048576 sda1
>>     8        2   33554432 sda2
>>     8        3   10485760 sda3
>>     8        4 2885176775 sda4
>>     8       16 2930266584 sdb
>>     8       17    1048576 sdb1
>>     8       18   33554432 sdb2
>>     8       19   10485760 sdb3
>>     8       20 2885176775 sdb4
>>     8       32 2930266584 sdc
>>     8       33    1048576 sdc1
>>     8       34   33554432 sdc2
>>     8       35   10485760 sdc3
>>     8       36 2885176775 sdc4
>>     8       48 2930266584 sdd
>>     8       49    1048576 sdd1
>>     8       50   33554432 sdd2
>>     8       51   10485760 sdd3
>>     8       52 2885176775 sdd4
>>
>>
>> Interestingly, sda, sdb, etc. are included. So "DEVICE partitions" 
>> actually considers them.
>>
>
> I used your command and config, updated kernel and mdadm, but raid also
> created correctly after reboot.
>
> My OS is fedora, it may have been affected by some other system tools? I
> have no idea.

The Arch kernel has RAID autodetection enabled. I just tried to 
reproduce it. While mdadm will not consider /dev/sd[ab] as members, the 
kernel's autodetection will. For that you have to reboot.
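
(On Arch, the option can be checked with something like the following, 
assuming the kernel exposes /proc/config.gz:)

   zgrep CONFIG_MD_AUTODETECT /proc/config.gz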

I used this ISO in a VM with 2 harddisks to reproduce the issue: 
https://mirror.informatik.tu-freiberg.de/arch/iso/2024.04.01/archlinux-2024.04.01-x86_64.iso


Kind Regards,
   Sven



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-13 21:37       ` Sven Köhler
@ 2024-04-18  7:31         ` Li Nan
  2024-04-18 22:07           ` Sven Köhler
  2024-04-18 22:11           ` Sven Köhler
  0 siblings, 2 replies; 10+ messages in thread
From: Li Nan @ 2024-04-18  7:31 UTC (permalink / raw)
  To: Sven Köhler, Li Nan, linux-raid



On 2024/4/14 5:37, Sven Köhler wrote:

[...]

>>
>> I used your command and config, updated kernel and mdadm, but raid also
>> created correctly after reboot.
>>
>> My OS is fedora, it may have been affected by some other system tools? I
>> have no idea.
> 
> The Arch kernel has RAID autodetection enabled. I just tried to reproduce 
> it. While mdadm will not consider /dev/sd[ab] as members, the kernel's 
> autodetection will. For that you have to reboot.
> 

It is not about autodetection. Autodetection only deals with the devices
in the 'all_detected_devices' list, and a device is added to it by
blk_add_partition(). So sdX will not be added to this list and will not
be autodetected.
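
Even for partitions, if I remember correctly, autodetection only applies
to those flagged as RAID in the partition table (e.g. MBR type 0xfd),
which can be checked with something like:

   fdisk -l /dev/sda | grep -i raid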

> I used this ISO in a VM with 2 harddisks to reproduce the issue: 
> https://mirror.informatik.tu-freiberg.de/arch/iso/2024.04.01/archlinux-2024.04.01-x86_64.iso 
> 
> 
> 
> Kind Regards,
>    Sven
> 
> 
> .

-- 
Thanks,
Nan



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-18  7:31         ` Li Nan
@ 2024-04-18 22:07           ` Sven Köhler
  2024-04-18 22:11           ` Sven Köhler
  1 sibling, 0 replies; 10+ messages in thread
From: Sven Köhler @ 2024-04-18 22:07 UTC (permalink / raw)
  To: linux-raid

On 18.04.24 at 09:31, Li Nan wrote:
> 
> 
> On 2024/4/14 5:37, Sven Köhler wrote:
> 
> [...]
> 
>> The Arch kernel has RAID autodetection enabled. I just tried to 
>> reproduce it. While mdadm will not consider /dev/sd[ab] as members, 
>> the kernel's autodetection will. For that you have to reboot.
>>
> 
> It is not about autodetection. Autodetection only deals with the devices in
> list 'all_detected_devices', device is added to it by blk_add_partition().
> So sdx will not be added to this list, and will not be autodetect.

I apologize. It's not the kernel autodetection. Arch Linux uses udev 
rules to re-assemble mdadm arrays during boot. The udev rules execute

   /usr/bin/mdadm -If $name

where $name is likely a device like /dev/sda. I'm not sure yet whether 
the udev rules have changed or mdadm has changed.
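
To narrow it down, I will probably check which installed rules invoke 
mdadm and trace a udev test run for one of the disks, along the lines of:

   grep -rn "mdadm -If" /usr/lib/udev/rules.d/
   udevadm test /sys/block/sda 2>&1 | grep -i mdadm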

I will continue digging.




* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-18  7:31         ` Li Nan
  2024-04-18 22:07           ` Sven Köhler
@ 2024-04-18 22:11           ` Sven Köhler
  1 sibling, 0 replies; 10+ messages in thread
From: Sven Köhler @ 2024-04-18 22:11 UTC (permalink / raw)
  To: Li Nan, linux-raid

On 18.04.24 at 09:31, Li Nan wrote:
> 
> 
> On 2024/4/14 5:37, Sven Köhler wrote:
> 
> [...]
> 
>>>
>>> I used your command and config, updated kernel and mdadm, but raid also
>>> created correctly after reboot.
>>>
>>> My OS is fedora, it may have been affected by some other system tools? I
>>> have no idea.
>>
>> The Arch kernel has RAID autodetection enabled. I just tried to 
>> reproduce it. While mdadm will not consider /dev/sd[ab] as members, 
>> the kernel's autodetection will. For that you have to reboot.
>>
> 
> It is not about autodetection. Autodetection only deals with the devices in
> list 'all_detected_devices', device is added to it by blk_add_partition().
> So sdx will not be added to this list, and will not be autodetect.

I apologize. It's not the kernel autodetection. Arch Linux uses udev 
rules to re-assemble mdadm arrays during boot. The udev rules execute

   /usr/bin/mdadm -If $name

where $name is likely a device like /dev/sda. I'm not sure yet whether 
the udev rules have changed or mdadm has changed.

I will continue digging.



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-04-08 23:31 regression: drive was detected as raid member due to metadata on partition Sven Köhler
  2024-04-10  1:56 ` Li Nan
@ 2024-05-07  7:32 ` Mariusz Tkaczyk
  2024-05-28 22:57   ` Sven Köhler
  1 sibling, 1 reply; 10+ messages in thread
From: Mariusz Tkaczyk @ 2024-05-07  7:32 UTC (permalink / raw)
  To: Sven Köhler; +Cc: linux-raid

On Tue, 9 Apr 2024 01:31:35 +0200
Sven Köhler <sven.koehler@gmail.com> wrote:

> I strongly believe that mdadm should ignore any metadata - regardless of 
> the version - that is at a location owned by any of the partitions. 

That would require mdadm to understand the GPT partition table, not only
clone it. We have GPT support to clone the GPT metadata (see super-gpt.c).
It should save us from such issues, so you have my ack if you want to do
this.

But... GPT should have a secondary header located at the end of the
device, so your metadata should not be at the end. Are you using a GPT or
MBR partition table? Maybe a missing secondary GPT header is the reason?

> While I'm not 100% sure how to implement that, the following might also 
> work: first scan the partitions for metadata, then ignore if the parent 
> device has metadata with a UUID previously found.

No, it is not an option. In the udev world, you should only operate on
the device you are processing, so we should avoid referencing the rest of
the system.

BTW, to avoid this issue you can leave a few bytes empty at the end of
the disk: simply make your last partition end a few bytes before the end
of the drive. With that, the metadata will not be recognized directly on
the drive. That is at least what I expect, but I'm not experienced with
native metadata, so please be aware of that.
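
For example (just a way to verify; the device name is a placeholder),
something like this should show whether any space is left unallocated
after the last partition:

   parted /dev/sdX unit s print free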

> I did the right thing and converted my RAID arrays to metadata 1.2, but 
> I'd like to save others from the adrenaline shock.

There are reasons why we introduced v1.2, located at the beginning of the
device. You can try to fix it, but I think that you should just follow
upstream and choose 1.2 if you can.

As we move more and more to 1.2, we naturally care less about 0.9,
especially about workarounds in other utilities. We cannot control
whether legacy workarounds are still there (the root cause of this change
may be outside md/mdadm, you never know :)).

So cases like this will always come up. It is right to use 1.2 now, which
is better supported, if you don't have a strong need to stay with 0.9.

Anyway, patches are always welcome!
Thanks,
Mariusz



* Re: regression: drive was detected as raid member due to metadata on partition
  2024-05-07  7:32 ` Mariusz Tkaczyk
@ 2024-05-28 22:57   ` Sven Köhler
  0 siblings, 0 replies; 10+ messages in thread
From: Sven Köhler @ 2024-05-28 22:57 UTC (permalink / raw)
  To: Mariusz Tkaczyk; +Cc: linux-raid

Hi Mariusz,

On 07.05.24 at 09:32, Mariusz Tkaczyk wrote:
> On Tue, 9 Apr 2024 01:31:35 +0200
> Sven Köhler <sven.koehler@gmail.com> wrote:
> 
>> I strongly believe that mdadm should ignore any metadata - regardless of
>> the version - that is at a location owned by any of the partitions.
> 
> That would require mdadm to understand gpt parttable, not only clone it.
> We have gpt support to clone the gpt metadata( see super-gpt.c).
> It should save us from such issues so you have my ack if you want to do this.

I get your point, but that seems wrong to me. I wonder whether the kernel
has an interface to gather information about the partitions on a device.
After all, the kernel knows lots of partition table types (MBR, GPT, ...).
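
For instance, sysfs already exposes each partition's extent regardless of
the partition table type, so in principle the information is there
(values are in 512-byte sectors):

   cat /sys/block/sda/sda4/start /sys/block/sda/sda4/size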

> But... GPT should have secondary header located at the end of the device, so
> your metadata should be not at the end. Are you using gpt or mbr parttable?
> Maybe missing secondary gpt header is the reason?

I just checked: my disks don't have a GPT backup at the end. I might have
converted an MBR partition table to GPT, which would not create a backup
GPT if the space is already occupied by a partition.
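
For anyone who wants to check their own disks, something like sgdisk's
verify mode should complain if the backup GPT header is missing:

   sgdisk --verify /dev/sda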

That said, for the sake of argument, I might just as well be using an 
MBR partition table.

>> While I'm not 100% sure how to implement that, the following might also
>> work: first scan the partitions for metadata, then ignore if the parent
>> device has metadata with a UUID previously found.
> 
> No, it is not an option. In udev world, you should only operate on device you
> are processing so we should avoid referencing the system.

Hmm, I think I know what you mean.

> BTW. To avoid this issue you can left few bytes empty at the end of disk, simply
> make your last partition ended few bytes before end of the drive. With that
> metadata will not be recognized directly on the drive. That is at least what I
> expected but I'm not native experienced so please be aware of that.

I verified that my last partition ends at the last sector of the disk.
I'm pretty sure that means it must have been an MBR partition table once
upon a time.

This is not about me. I'm not asking you to support my case just so my
system works. I already converted to metadata 1.2, and that fixed the
issue regardless of where the last partition ends.

It's a regression in the sense that my system had worked for years and
after an upgrade suddenly didn't. I'd like to prevent the same from
happening to others. It was pretty scary, even though no data seems to
have been lost.

>> I did the right thing and converted my RAID arrays to metadata 1.2, but
>> I'd like to save other from the adrenaline shock.
> 
> There are reasons why we introduced v1.2 located at the beginning of device.
> You can try to fix it but I think that you should just follow upstream and
> choose 1.2 if you can.

Yes, I agree with you. That's why I migrated to 1.2 already.

> As we are more and more with 1.2 that naturally we care less about 0.9,
> especially of workarounds in other utilities. We cannot control
> if legacy workarounds are still there (the root cause of this change may be
> outside md/mdadm, you never know :)).

Likely, the reason is outside of the mdadm binary but inside the mdadm
repo. Arch Linux uses the udev rules provided by the mdadm package
without modification. The diff on the udev rules between the mdadm 4.2
and 4.3 releases is significant. Both invoke mdadm -If $name, but likely
the order has changed.
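
If someone wants to look, the comparison I have in mind is roughly this
(assuming the release tags are named like this in the mdadm repo):

   git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
   cd mdadm
   git diff mdadm-4.2 mdadm-4.3 -- udev-md-raid-assembly.rules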

An investigation of that is still pending. I'm not an expert in udev
debugging, and the logs don't show much.

> So the cases like that will always come. It is right to use 1.2 now to be
> better supported if you don't have strong need to stay with 0.9.

Would it be possible to have automated tests for incremental raid 
assembly via udev rules? I'm not an expert in udev though.


> Anyway, patches are always welcomed!

Still working on my udev debugging skills. But afterwards, I may very 
well prepare a patch.



Best,
   Sven

