linux-raid.vger.kernel.org archive mirror
* [PATCH v2] Detail: show correct raid level when the array is inactive
@ 2020-09-14  2:52 Lidong Zhong
  2020-10-14 15:19 ` Jes Sorensen
  0 siblings, 1 reply; 5+ messages in thread
From: Lidong Zhong @ 2020-09-14  2:52 UTC (permalink / raw)
  To: jes; +Cc: linux-raid, Lidong Zhong

Sometimes the raid level in the output of `mdadm -D /dev/mdX` is
misleading when the array is in an inactive state. Here is a test case
that demonstrates the problem.
1\ Create a raid1 device with two disks. Specify a hostname different
from the real one for later verification.

node1:~ # mdadm --create /dev/md0 --homehost TESTARRAY -o -l 1 -n 2 /dev/sdb
/dev/sdc
2\ Remove one of the devices and reboot.
3\ Show the details of the raid1 device.

node1:~ # mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent
          State : inactive
Working Devices : 1

You can see that the "Raid Level" reported for /dev/md127 is now raid0.
After step 2\, the degraded raid1 device is recognized as a "foreign"
array by 64-md-raid-assembly.rules, so the timer that would activate
the raid1 device is never triggered. The array level returned by the
GET_ARRAY_INFO ioctl is 0, and the string shown for "Raid Level" is

str = map_num(pers, array.level);

where pers is defined as

mapping_t pers[] = {
{ "linear", LEVEL_LINEAR},
{ "raid0", 0},
{ "0", 0}
...

Level 0 therefore maps to "raid0", which is the misleading value shown
in this test case.
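
To make the lookup concrete, below is a minimal stand-alone sketch of how
a numeric level of 0 resolves to the string "raid0". The simplified
map_num() and the truncated pers[] table only mimic mdadm's mapping_t
convention; the extra raid1 entries and the -1 stand-in for LEVEL_LINEAR
are assumptions, not the exact mdadm source.

/* Minimal sketch, not the exact mdadm source: show how a numeric array
 * level of 0 is translated to the string "raid0" by a mapping_t lookup. */
#include <stdio.h>

typedef struct { char *name; int num; } mapping_t;  /* {name, number} pair */

static mapping_t pers[] = {
	{ "linear", -1 },	/* assumed stand-in for LEVEL_LINEAR */
	{ "raid0", 0 },
	{ "0", 0 },
	{ "raid1", 1 },
	{ "1", 1 },
	{ NULL, 0 }
};

/* Return the first name registered for num, or NULL if none matches. */
static char *map_num(mapping_t *map, int num)
{
	for (; map->name; map++)
		if (map->num == num)
			return map->name;
	return NULL;
}

int main(void)
{
	/* The inactive array in the test case above reports level 0, so the
	 * lookup lands on the "raid0" entry even though the on-disk
	 * superblock describes a raid1 array. */
	int reported_level = 0;

	printf("     Raid Level : %s\n", map_num(pers, reported_level));
	return 0;
}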

Changelog:
v1: don't show "Raid Level" when array is inactive
Signed-off-by: Lidong Zhong <lidong.zhong@suse.com>
---
 Detail.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Detail.c b/Detail.c
index 24eeba0..b6587c8 100644
--- a/Detail.c
+++ b/Detail.c
@@ -224,7 +224,10 @@ int Detail(char *dev, struct context *c)
 	}
 
 	/* Ok, we have some info to print... */
-	str = map_num(pers, array.level);
+	if (inactive)
+		str = map_num(pers, info->array.level);
+	else
+		str = map_num(pers, array.level);
 
 	if (c->export) {
 		if (array.raid_disks) {
-- 
2.26.1



* Re: [PATCH v2] Detail: show correct raid level when the array is inactive
  2020-09-14  2:52 [PATCH v2] Detail: show correct raid level when the array is inactive Lidong Zhong
@ 2020-10-14 15:19 ` Jes Sorensen
  2020-10-20  9:50   ` Tkaczyk, Mariusz
  0 siblings, 1 reply; 5+ messages in thread
From: Jes Sorensen @ 2020-10-14 15:19 UTC (permalink / raw)
  To: Lidong Zhong; +Cc: linux-raid


Applied!

Thanks,
Jes



* RE: [PATCH v2] Detail: show correct raid level when the array is inactive
  2020-10-14 15:19 ` Jes Sorensen
@ 2020-10-20  9:50   ` Tkaczyk, Mariusz
  2020-10-20 12:32     ` Tkaczyk, Mariusz
  0 siblings, 1 reply; 5+ messages in thread
From: Tkaczyk, Mariusz @ 2020-10-20  9:50 UTC (permalink / raw)
  To: Jes Sorensen, Lidong Zhong; +Cc: linux-raid

Hello Lidong,
We are observing a segfault during IMSM raid creation, caused by your patch.

Core was generated by `/sbin/mdadm --detail --export /dev/md127'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000042516e in Detail (dev=0x7ffdbd6d1efc "/dev/md127", c=0x7ffdbd6d0710) at Detail.c:228
228                     str = map_num(pers, info->array.level);

The issue occurs during container or volume creation and cannot be reproduced manually.
In my opinion, udev is racing with the create process. Observed on RHEL 8.2 with upstream mdadm.
Could you take a look?

If you lack IMSM hardware, please use the IMSM_NO_PLATFORM environment variable.
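
For illustration, here is a minimal stand-alone sketch of the kind of NULL
guard that would avoid the crash if, as the backtrace suggests, the
superblock info has not been loaded when the array is inactive. The
structures and the guard are assumptions for demonstration only, not the
actual follow-up fix.

#include <stdio.h>

/* Hypothetical, simplified stand-ins for the mdadm structures involved;
 * only the field relevant to the crash is modeled. */
struct array_info { int level; };
struct mdinfo_sketch { struct array_info array; };

/* Prefer the superblock's view only when the array is inactive AND the
 * superblock info was actually loaded; otherwise use the kernel's view. */
static int level_to_report(int inactive, struct mdinfo_sketch *info,
			   struct array_info *array)
{
	if (inactive && info)
		return info->array.level;
	return array->level;
}

int main(void)
{
	struct array_info kernel_view = { .level = 0 };

	/* Simulates the suspected race: the array is inactive but no
	 * superblock info has been loaded yet (info == NULL).  Without
	 * the NULL check this would dereference a NULL pointer. */
	printf("level = %d\n", level_to_report(1, NULL, &kernel_view));
	return 0;
}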

Thanks,
Mariusz




* RE: [PATCH v2] Detail: show correct raid level when the array is inactive
  2020-10-20  9:50   ` Tkaczyk, Mariusz
@ 2020-10-20 12:32     ` Tkaczyk, Mariusz
  2020-11-22 13:15       ` Zhong Lidong
  0 siblings, 1 reply; 5+ messages in thread
From: Tkaczyk, Mariusz @ 2020-10-20 12:32 UTC (permalink / raw)
  To: Tkaczyk, Mariusz, Jes Sorensen, Lidong Zhong; +Cc: linux-raid

Hello,
one clarification:
The issue can be reproduced: just create a container and observe the system journal.
What I meant is that the segfault cannot be triggered directly; it happens in the
background during creation.

Sorry for being inaccurate.

Mariusz




* Re: [PATCH v2] Detail: show correct raid level when the array is inactive
  2020-10-20 12:32     ` Tkaczyk, Mariusz
@ 2020-11-22 13:15       ` Zhong Lidong
  0 siblings, 0 replies; 5+ messages in thread
From: Zhong Lidong @ 2020-11-22 13:15 UTC (permalink / raw)
  To: Tkaczyk, Mariusz, Jes Sorensen; +Cc: linux-raid

Hi Tkaczyk,

Sorry for the late response. I can reproduce it locally. I'll submit
a new patch to fix the regression.

Thanks,
Lidong




Thread overview: 5+ messages
2020-09-14  2:52 [PATCH v2] Detail: show correct raid level when the array is inactive Lidong Zhong
2020-10-14 15:19 ` Jes Sorensen
2020-10-20  9:50   ` Tkaczyk, Mariusz
2020-10-20 12:32     ` Tkaczyk, Mariusz
2020-11-22 13:15       ` Zhong Lidong
