* lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
       [not found] <338941973.7699634.1473230038475.JavaMail.zimbra@redhat.com>
@ 2016-09-07  6:43 ` Yi Zhang
  2016-09-08 22:56   ` Shaohua Li
  0 siblings, 1 reply; 10+ messages in thread
From: Yi Zhang @ 2016-09-07  6:43 UTC (permalink / raw)
  To: linux-raid; +Cc: shli

Hello

I tried creating an IMSM RAID10 with a missing device and saw lots of "md: export_rdev(sde)" messages printed. Could anyone help check it?

Steps I used:
mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
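
For anyone reproducing this, the messages can be watched as they appear with, e.g.:

dmesg -w | grep export_rdev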

Version:
4.8.0-rc5
mdadm - v3.4-84-gbd1fd72 - 25th August 2016

Log: 
http://pastebin.com/FJJwvgg6

<6>[  301.102007] md: bind<sdb>
<6>[  301.102095] md: bind<sdc>
<6>[  301.102159] md: bind<sdd>
<6>[  301.102215] md: bind<sde>
<6>[  301.102291] md: bind<sdf>
<6>[  301.103010] ata3.00: Enabling discard_zeroes_data
<6>[  311.714344] ata3.00: Enabling discard_zeroes_data
<6>[  311.721866] md: bind<sdb>
<6>[  311.721965] md: bind<sdc>
<6>[  311.722029] md: bind<sdd>
<5>[  311.733165] md/raid10:md127: not clean -- starting background reconstruction
<6>[  311.733167] md/raid10:md127: active with 3 out of 4 devices
<6>[  311.733186] md127: detected capacity change from 0 to 240060989440
<6>[  311.774027] md: bind<sde>
<6>[  311.810664] md: md127 switched to read-write mode.
<6>[  311.819885] md: resync of RAID array md127
<6>[  311.819886] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
<6>[  311.819887] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
<6>[  311.819891] md: using 128k window, over a total of 234435328k.
<6>[  316.606073] ata3.00: Enabling discard_zeroes_data
<6>[  343.949845] capability: warning: `turbostat' uses 32-bit capabilities (legacy support in use)
<6>[ 1482.314944] md: md127: resync done.
<7>[ 1482.315086] RAID10 conf printout:
<7>[ 1482.315087]  --- wd:3 rd:4
<7>[ 1482.315089]  disk 0, wo:0, o:1, dev:sdb
<7>[ 1482.315089]  disk 1, wo:0, o:1, dev:sdc
<7>[ 1482.315090]  disk 2, wo:0, o:1, dev:sdd
<7>[ 1482.315099] RAID10 conf printout:
<7>[ 1482.315099]  --- wd:3 rd:4
<7>[ 1482.315100]  disk 0, wo:0, o:1, dev:sdb
<7>[ 1482.315100]  disk 1, wo:0, o:1, dev:sdc
<7>[ 1482.315101]  disk 2, wo:0, o:1, dev:sdd
<7>[ 1482.315101]  disk 3, wo:1, o:1, dev:sde
<6>[ 1482.315220] md: recovery of RAID array md127
<6>[ 1482.315221] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
<6>[ 1482.315222] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
<6>[ 1482.315227] md: using 128k window, over a total of 117217664k.
<6>[ 2697.184217] md: md127: recovery done.
<7>[ 2697.524143] RAID10 conf printout:
<7>[ 2697.524144]  --- wd:4 rd:4
<7>[ 2697.524146]  disk 0, wo:0, o:1, dev:sdb
<7>[ 2697.524146]  disk 1, wo:0, o:1, dev:sdc
<7>[ 2697.524147]  disk 2, wo:0, o:1, dev:sdd
<7>[ 2697.524148]  disk 3, wo:0, o:1, dev:sde
<6>[ 2697.524632] md: export_rdev(sde)
<6>[ 2697.549452] md: export_rdev(sde)
<6>[ 2697.568763] md: export_rdev(sde)
<6>[ 2697.587938] md: export_rdev(sde)
<6>[ 2697.607271] md: export_rdev(sde)
<6>[ 2697.626321] md: export_rdev(sde)
<6>[ 2697.645676] md: export_rdev(sde)
<6>[ 2697.663211] md: export_rdev(sde)
<6>[ 2697.681603] md: export_rdev(sde)
<6>[ 2697.699117] md: export_rdev(sde)
<6>[ 2697.716510] md: export_rdev(sde)

Best Regards,
  Yi Zhang




* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-07  6:43 ` lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing Yi Zhang
@ 2016-09-08 22:56   ` Shaohua Li
  2016-09-09 12:56     ` Artur Paszkiewicz
  0 siblings, 1 reply; 10+ messages in thread
From: Shaohua Li @ 2016-09-08 22:56 UTC (permalink / raw)
  To: Yi Zhang; +Cc: linux-raid, Jes.Sorensen

On Wed, Sep 07, 2016 at 02:43:41AM -0400, Yi Zhang wrote:
> Hello
> 
> I tried creating an IMSM RAID10 with a missing device and saw lots of "md: export_rdev(sde)" messages printed. Could anyone help check it?
> 
> Steps I used:
> mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
> mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
> 
> Version:
> 4.8.0-rc5
> mdadm - v3.4-84-gbd1fd72 - 25th August 2016

I can't reproduce this with an old mdadm but can with upstream mdadm. It looks
like mdadm keeps writing the new_dev sysfs entry.
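
For context, mdadm/mdmon pushes member disks of external-metadata arrays into
the kernel through this sysfs attribute; a minimal sketch of the write in
question (the device number is illustrative):

# Ask md127 to import block device 8:64 (sde) as a member rdev; if the
# add is rejected, the kernel releases the device again and logs
# "md: export_rdev(sde)".
echo 8:64 > /sys/block/md127/md/new_dev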

Jes, any idea?

Thanks,
Shaohua 
[snip]


* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-08 22:56   ` Shaohua Li
@ 2016-09-09 12:56     ` Artur Paszkiewicz
  2016-09-12  8:03       ` Yi Zhang
  2016-09-14 21:05       ` Jes Sorensen
  0 siblings, 2 replies; 10+ messages in thread
From: Artur Paszkiewicz @ 2016-09-09 12:56 UTC (permalink / raw)
  To: Shaohua Li, Yi Zhang; +Cc: linux-raid, Jes.Sorensen

On 09/09/2016 12:56 AM, Shaohua Li wrote:
> On Wed, Sep 07, 2016 at 02:43:41AM -0400, Yi Zhang wrote:
>> Hello
>>
>> I tried creating an IMSM RAID10 with a missing device and saw lots of "md: export_rdev(sde)" messages printed. Could anyone help check it?
>>
>> Steps I used:
>> mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
>> mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
>>
>> Version:
>> 4.8.0-rc5
>> mdadm - v3.4-84-gbd1fd72 - 25th August 2016
> 
> I can't reproduce this with an old mdadm but can with upstream mdadm. It looks
> like mdadm keeps writing the new_dev sysfs entry.
> 
> Jes, any idea?
> 
> Thanks,
> Shaohua 
[snip]

Can you check if this fix works for you? If it does, I'll send a proper
patch for this.

Thanks,
Artur

diff --git a/super-intel.c b/super-intel.c
index 92817e9..ffa71f6 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -7789,6 +7789,9 @@ static struct mdinfo *imsm_activate_spare(struct active_array *a,
 			IMSM_T_STATE_DEGRADED)
 		return NULL;
 
+	if (get_imsm_map(dev, MAP_0)->map_state == IMSM_T_STATE_UNINITIALIZED)
+		return NULL;
+
 	/*
 	 * If there are any failed disks check state of the other volume.
 	 * Block rebuild if the another one is failed until failed disks
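
For anyone who wants to try it, a test could look like this (a sketch assuming
a git checkout of mdadm; the patch file name is hypothetical):

cd mdadm
git apply imsm-activate-spare-fix.patch
make
./mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
./mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
dmesg | grep export_rdev    # with the fix, the flood should be gone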


* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-09 12:56     ` Artur Paszkiewicz
@ 2016-09-12  8:03       ` Yi Zhang
  2016-09-12 10:58         ` Artur Paszkiewicz
  2016-09-14 21:05       ` Jes Sorensen
  1 sibling, 1 reply; 10+ messages in thread
From: Yi Zhang @ 2016-09-12  8:03 UTC (permalink / raw)
  To: Artur Paszkiewicz, Shaohua Li; +Cc: linux-raid, Jes.Sorensen



On 09/09/2016 08:56 PM, Artur Paszkiewicz wrote:
> On 09/09/2016 12:56 AM, Shaohua Li wrote:
>> On Wed, Sep 07, 2016 at 02:43:41AM -0400, Yi Zhang wrote:
>>> Hello
>>>
>>> I tried creating an IMSM RAID10 with a missing device and saw lots of "md: export_rdev(sde)" messages printed. Could anyone help check it?
>>>
>>> Steps I used:
>>> mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
>>> mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
>>>
>>> Version:
>>> 4.8.0-rc5
>>> mdadm - v3.4-84-gbd1fd72 - 25th August 2016
>> I can't reproduce this with an old mdadm but can with upstream mdadm. It looks
>> like mdadm keeps writing the new_dev sysfs entry.
>>
>> Jes, any idea?
>>
>> Thanks,
>> Shaohua
[snip]
> Can you check if this fix works for you? If it does, I'll send a proper
> patch for this.
Hello Artur
With your patch, no "md: export_rdev(sde)" messages are printed after creating the RAID10.

I found another problem and am not sure whether it is expected behavior;
could you help confirm it? Thanks.
When I create one container with 4 disks [1] and create one RAID10 with
3 disks (sd[b-d]) + 1 missing [2], it ends up binding the fourth disk:
sde [3].

[1] mdadm -CR /dev/md0 /dev/sd[b-e] -n4 -e imsm
[2] mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing --size=500M
[3] # cat /proc/mdstat
Personalities : [raid10]
md127 : active raid10 sde[4] sdd[2] sdc[1] sdb[0]
       1024000 blocks super external:/md0/0 128K chunks 2 near-copies [4/4] [UUUU]

md0 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
       4420 blocks super external:imsm

unused devices: <none>
[snip]



* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-12  8:03       ` Yi Zhang
@ 2016-09-12 10:58         ` Artur Paszkiewicz
  2016-09-14  9:24           ` Yi Zhang
  0 siblings, 1 reply; 10+ messages in thread
From: Artur Paszkiewicz @ 2016-09-12 10:58 UTC (permalink / raw)
  To: Yi Zhang, Shaohua Li; +Cc: linux-raid, Jes.Sorensen

On 09/12/2016 10:03 AM, Yi Zhang wrote:
> Hello Artur
> With your patch, no "md: export_rdev(sde)" messages are printed after creating the RAID10.
> 
> I found another problem and am not sure whether it is expected behavior; could you help confirm it? Thanks.
> When I create one container with 4 disks [1] and create one RAID10 with 3 disks (sd[b-d]) + 1 missing [2], it ends up binding the fourth disk: sde [3].
> 
> [1] mdadm -CR /dev/md0 /dev/sd[b-e] -n4 -e imsm
> [2] mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing --size=500M
> [3] # cat /proc/mdstat
> Personalities : [raid10]
> md127 : active raid10 sde[4] sdd[2] sdc[1] sdb[0]
>       1024000 blocks super external:/md0/0 128K chunks 2 near-copies [4/4] [UUUU]
> 
> md0 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
>       4420 blocks super external:imsm
> 
> unused devices: <none>

I think that this is correct behavior. Because there is a spare disk
available in the container, it is used for rebuilding the volume. This
is equivalent to:

mdadm -CR /dev/md0 /dev/sd[b-d] -n3 -e imsm
mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing --size=500M
mdadm -a /dev/md0 /dev/sde
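
Either way, which disk was used for the rebuild can be confirmed with, e.g.:

cat /proc/mdstat
mdadm --detail /dev/md127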



* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-12 10:58         ` Artur Paszkiewicz
@ 2016-09-14  9:24           ` Yi Zhang
  0 siblings, 0 replies; 10+ messages in thread
From: Yi Zhang @ 2016-09-14  9:24 UTC (permalink / raw)
  To: Artur Paszkiewicz; +Cc: Shaohua Li, Jes.Sorensen, linux-raid



On 09/12/2016 06:58 PM, Artur Paszkiewicz wrote:
> On 09/12/2016 10:03 AM, Yi Zhang wrote:
>> Hello Artur
>> With your patch, no "md: export_rdev(sde)" messages are printed after creating the RAID10.
>>
>> I found another problem and am not sure whether it is expected behavior; could you help confirm it? Thanks.
>> When I create one container with 4 disks [1] and create one RAID10 with 3 disks (sd[b-d]) + 1 missing [2], it ends up binding the fourth disk: sde [3].
>>
>> [1] mdadm -CR /dev/md0 /dev/sd[b-e] -n4 -e imsm
>> [2] mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing --size=500M
>> [3] # cat /proc/mdstat
>> Personalities : [raid10]
>> md127 : active raid10 sde[4] sdd[2] sdc[1] sdb[0]
>>        1024000 blocks super external:/md0/0 128K chunks 2 near-copies [4/4] [UUUU]
>>
>> md0 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
>>        4420 blocks super external:imsm
>>
>> unused devices: <none>
> I think that this is correct behavior. Because there is a spare disk
> available in the container, it is used for rebuilding the volume. This
> is equivalent to:
>
> mdadm -CR /dev/md0 /dev/sd[b-d] -n3 -e imsm
> mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing --size=500M
> mdadm -a /dev/md0 /dev/sde
Got it, thanks Artur for the confirmation.

Yi



* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-09 12:56     ` Artur Paszkiewicz
  2016-09-12  8:03       ` Yi Zhang
@ 2016-09-14 21:05       ` Jes Sorensen
  2016-09-15  8:01         ` Artur Paszkiewicz
  1 sibling, 1 reply; 10+ messages in thread
From: Jes Sorensen @ 2016-09-14 21:05 UTC (permalink / raw)
  To: Artur Paszkiewicz; +Cc: Shaohua Li, Yi Zhang, linux-raid

Artur Paszkiewicz <artur.paszkiewicz@intel.com> writes:
> On 09/09/2016 12:56 AM, Shaohua Li wrote:
>> On Wed, Sep 07, 2016 at 02:43:41AM -0400, Yi Zhang wrote:
>>> Hello
>>>
>>> I tried creating an IMSM RAID10 with a missing device and saw lots of
>>> "md: export_rdev(sde)" messages printed. Could anyone help check it?
>>>
>>> Steps I used:
>>> mdadm -CR /dev/md0 /dev/sd[b-f] -n5 -e imsm
>>> mdadm -CR /dev/md/Volume0 -l10 -n4 /dev/sd[b-d] missing
>>>
>>> Version:
>>> 4.8.0-rc5
>>> mdadm - v3.4-84-gbd1fd72 - 25th August 2016
>> 
>> I can't reproduce this with an old mdadm but can with upstream mdadm. It looks
>> like mdadm keeps writing the new_dev sysfs entry.
>> 
>> Jes, any idea?
>> 
>> Thanks,
>> Shaohua 

[snip]

> Can you check if this fix works for you? If it does, I'll send a proper
> patch for this.
>
> Thanks,
> Artur

Artur,

You were too fast :) Did you intend to post a patch with a commit
message?

Cheers,
Jes

[snip]


* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-14 21:05       ` Jes Sorensen
@ 2016-09-15  8:01         ` Artur Paszkiewicz
  2016-09-16 12:31           ` Jes Sorensen
  0 siblings, 1 reply; 10+ messages in thread
From: Artur Paszkiewicz @ 2016-09-15  8:01 UTC (permalink / raw)
  To: Jes Sorensen; +Cc: Shaohua Li, Yi Zhang, linux-raid

On 09/14/2016 11:05 PM, Jes Sorensen wrote:
> Artur,
> 
> You were too fast :) Did you intend to post a patch with a commit
> message?
> 
> Cheers,
> Jes
> 

Hi Jes,

I wanted to wait for feedback from Yi first. I just sent a proper patch
with a commit message.

Artur


* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-15  8:01         ` Artur Paszkiewicz
@ 2016-09-16 12:31           ` Jes Sorensen
  2016-09-18  2:53             ` Yi Zhang
  0 siblings, 1 reply; 10+ messages in thread
From: Jes Sorensen @ 2016-09-16 12:31 UTC (permalink / raw)
  To: Artur Paszkiewicz; +Cc: Shaohua Li, Yi Zhang, linux-raid

Artur Paszkiewicz <artur.paszkiewicz@intel.com> writes:
> On 09/14/2016 11:05 PM, Jes Sorensen wrote:
>> Artur,
>> 
>> You were too fast :) Did you intend to post a patch with a commit
>> message?
>> 
>> Cheers,
>> Jes
>> 
>
> Hi Jes,
>
> I wanted to wait for feedback from Yi first. I just sent a proper patch
> with a commit message.

That's great, much appreciated!

Jes


* Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing
  2016-09-16 12:31           ` Jes Sorensen
@ 2016-09-18  2:53             ` Yi Zhang
  0 siblings, 0 replies; 10+ messages in thread
From: Yi Zhang @ 2016-09-18  2:53 UTC (permalink / raw)
  To: Jes Sorensen, Artur Paszkiewicz; +Cc: Shaohua Li, linux-raid

Hello Artur
I have tested the patch you provided and it works, thanks.

Best Regards,
  Yi Zhang


----- Original Message -----
From: "Jes Sorensen" <Jes.Sorensen@redhat.com>
To: "Artur Paszkiewicz" <artur.paszkiewicz@intel.com>
Cc: "Shaohua Li" <shli@kernel.org>, "Yi Zhang" <yizhan@redhat.com>, linux-raid@vger.kernel.org
Sent: Friday, September 16, 2016 8:31:50 PM
Subject: Re: lots of "md: export_rdev(sde)" printed after create IMSM RAID10 with missing

Artur Paszkiewicz <artur.paszkiewicz@intel.com> writes:
> On 09/14/2016 11:05 PM, Jes Sorensen wrote:
>> Artur,
>> 
>> You were too fast :) Did you intend to post a patch with a commit
>> message?
>> 
>> Cheers,
>> Jes
>> 
>
> Hi Jes,
>
> I wanted to wait for feedback from Yi first. I just sent a proper patch
> with a commit message.

That's great, much appreciated!

Jes
