* [PATCH] md: ensure sectors is nonzero when change component size
@ 2017-10-12  8:30 Zhilong Liu
  2017-10-12 17:37 ` Shaohua Li
  0 siblings, 1 reply; 7+ messages in thread
From: Zhilong Liu @ 2017-10-12  8:30 UTC (permalink / raw)
  To: shli; +Cc: linux-raid, neilb, Zhilong Liu

For RAID levels where chunk_size is meaningful, the component_size must
be >= chunk_size when a resize is requested. If a resize with
"new_size < chunk_size" is requested, "mddev->pers->resize" rounds
sectors down to '0', and the array is no longer usable because
mddev->dev_sectors becomes '0'.

Cc: Neil Brown <neilb@suse.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 drivers/md/raid10.c | 2 ++
 drivers/md/raid5.c  | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 374df57..f3891f6 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3871,6 +3871,8 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
 
 	oldsize = raid10_size(mddev, 0, 0);
 	size = raid10_size(mddev, sectors, 0);
+	if (size == 0)
+		return -EINVAL;
 	if (mddev->external_size &&
 	    mddev->array_sectors > size)
 		return -EINVAL;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 928e24a..768767c 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7725,6 +7725,8 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
 	if (conf->log || raid5_has_ppl(conf))
 		return -EINVAL;
 	sectors &= ~((sector_t)conf->chunk_sectors - 1);
+	if (sectors == 0)
+		return -EINVAL;
 	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
 	if (mddev->external_size &&
 	    mddev->array_sectors > newsize)
-- 
2.6.6



* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-12  8:30 [PATCH] md: ensure sectors is nonzero when change component size Zhilong Liu
@ 2017-10-12 17:37 ` Shaohua Li
  2017-10-13  2:47   ` Zhilong Liu
  0 siblings, 1 reply; 7+ messages in thread
From: Shaohua Li @ 2017-10-12 17:37 UTC (permalink / raw)
  To: Zhilong Liu; +Cc: linux-raid, neilb

On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
> Against the raids which chunk_size is meaningful, the component_size
> must be >= chunk_size when require resize. If "new_size < chunk_size"
> has required, the "mddev->pers->resize" will set sectors as '0', and
> then the raids isn't meaningful any more due to mddev->dev_sectors is
> '0'.
> 
> Cc: Neil Brown <neilb@suse.com>
> Signed-off-by: Zhilong Liu <zlliu@suse.com>

Not sure about this, does a size-0 disk really do harm?

> ---
>  drivers/md/raid10.c | 2 ++
>  drivers/md/raid5.c  | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 374df57..f3891f6 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -3871,6 +3871,8 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
>  
>  	oldsize = raid10_size(mddev, 0, 0);
>  	size = raid10_size(mddev, sectors, 0);
> +	if (size == 0)
> +		return -EINVAL;
>  	if (mddev->external_size &&
>  	    mddev->array_sectors > size)
>  		return -EINVAL;
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 928e24a..768767c 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -7725,6 +7725,8 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
>  	if (conf->log || raid5_has_ppl(conf))
>  		return -EINVAL;
>  	sectors &= ~((sector_t)conf->chunk_sectors - 1);
> +	if (sectors == 0)
> +		return -EINVAL;
>  	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
>  	if (mddev->external_size &&
>  	    mddev->array_sectors > newsize)
> -- 
> 2.6.6
> 


* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-12 17:37 ` Shaohua Li
@ 2017-10-13  2:47   ` Zhilong Liu
  2017-10-13 19:05     ` Shaohua Li
  0 siblings, 1 reply; 7+ messages in thread
From: Zhilong Liu @ 2017-10-13  2:47 UTC (permalink / raw)
  To: Shaohua Li; +Cc: linux-raid, neilb



On 10/13/2017 01:37 AM, Shaohua Li wrote:
> On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
>> Against the raids which chunk_size is meaningful, the component_size
>> must be >= chunk_size when require resize. If "new_size < chunk_size"
>> has required, the "mddev->pers->resize" will set sectors as '0', and
>> then the raids isn't meaningful any more due to mddev->dev_sectors is
>> '0'.
>>
>> Cc: Neil Brown <neilb@suse.com>
>> Signed-off-by: Zhilong Liu <zlliu@suse.com>
> Not sure about this, does size 0 disk really harm?
>

From my side, I think changing the component size to '0' should be avoided.
When a resize is requested and new_size < current_chunk_size, for example in
raid5:

raid5.c: raid5_resize()
...
7727         sectors &= ~((sector_t)conf->chunk_sectors - 1);
...

'sectors' becomes '0'.

then:
...
7743         mddev->dev_sectors = sectors;
...

dev_sectors (the component size) becomes '0'.
The same scenario happens in raid10.
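
To make the rounding concrete, here is a minimal userspace sketch (not
kernel code; the chunk size of 1024 sectors and the requested size of 511
sectors are just assumed example values):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t chunk_sectors = 1024;	/* assumed 512KiB chunk, a power of two */
	uint64_t sectors = 511;		/* assumed request smaller than one chunk */

	/* same rounding as raid5_resize(): clear everything below one chunk */
	sectors &= ~(chunk_sectors - 1);

	printf("%llu\n", (unsigned long long)sectors);	/* prints 0 */
	return 0;
}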

So, changing the raid component_size to '0' is really not meaningful; md
should check for this scenario, otherwise it is troublesome to restore the
array after such an invalid resize.

Thanks,
-Zhilong

>> ---
>>   drivers/md/raid10.c | 2 ++
>>   drivers/md/raid5.c  | 2 ++
>>   2 files changed, 4 insertions(+)
>>
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index 374df57..f3891f6 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -3871,6 +3871,8 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
>>   
>>   	oldsize = raid10_size(mddev, 0, 0);
>>   	size = raid10_size(mddev, sectors, 0);
>> +	if (size == 0)
>> +		return -EINVAL;
>>   	if (mddev->external_size &&
>>   	    mddev->array_sectors > size)
>>   		return -EINVAL;
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 928e24a..768767c 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -7725,6 +7725,8 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
>>   	if (conf->log || raid5_has_ppl(conf))
>>   		return -EINVAL;
>>   	sectors &= ~((sector_t)conf->chunk_sectors - 1);
>> +	if (sectors == 0)
>> +		return -EINVAL;
>>   	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
>>   	if (mddev->external_size &&
>>   	    mddev->array_sectors > newsize)
>> -- 
>> 2.6.6
>>



* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-13  2:47   ` Zhilong Liu
@ 2017-10-13 19:05     ` Shaohua Li
  2017-10-16  7:31       ` Zhilong Liu
  0 siblings, 1 reply; 7+ messages in thread
From: Shaohua Li @ 2017-10-13 19:05 UTC (permalink / raw)
  To: Zhilong Liu; +Cc: linux-raid, neilb

On Fri, Oct 13, 2017 at 10:47:29AM +0800, Zhilong Liu wrote:
> 
> 
> On 10/13/2017 01:37 AM, Shaohua Li wrote:
> > On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
> > > Against the raids which chunk_size is meaningful, the component_size
> > > must be >= chunk_size when require resize. If "new_size < chunk_size"
> > > has required, the "mddev->pers->resize" will set sectors as '0', and
> > > then the raids isn't meaningful any more due to mddev->dev_sectors is
> > > '0'.
> > > 
> > > Cc: Neil Brown <neilb@suse.com>
> > > Signed-off-by: Zhilong Liu <zlliu@suse.com>
> > Not sure about this, does size 0 disk really harm?
> > 
> 
> From my site, I think changing the component size as '0' should be avoided.
> When resize changing required and new_size < current_chunk_size, such as
> raid5:
> 
> raid5.c: raid5_resize()
> ...
> 7727         sectors &= ~((sector_t)conf->chunk_sectors - 1);
> ...
> 
> 'sectors' got '0'.
> 
> then:
> ...
> 7743         mddev->dev_sectors = sectors;
> ...
> 
> the dev_sectors(the component size) got '0'.
> same scenario happens in raid10.
> 
> So, it's really not meaningful if changing the raid component_size to '0',
> md
> should give this scenario a test, otherwise, it's a trouble thing to restore
> after
> doing such invalid re-size.

Yes, I understand how it could be 0. My question is what's wrong with a size-0
disk? For example, if you don't set up a backing file for a loop block device,
its size is 0.

Thanks,
Shaohua


* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-13 19:05     ` Shaohua Li
@ 2017-10-16  7:31       ` Zhilong Liu
  2017-10-17  0:21         ` NeilBrown
  0 siblings, 1 reply; 7+ messages in thread
From: Zhilong Liu @ 2017-10-16  7:31 UTC (permalink / raw)
  To: Shaohua Li; +Cc: linux-raid, neilb



On 10/14/2017 03:05 AM, Shaohua Li wrote:
> On Fri, Oct 13, 2017 at 10:47:29AM +0800, Zhilong Liu wrote:
>>
>> On 10/13/2017 01:37 AM, Shaohua Li wrote:
>>> On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
>>>> Against the raids which chunk_size is meaningful, the component_size
>>>> must be >= chunk_size when require resize. If "new_size < chunk_size"
>>>> has required, the "mddev->pers->resize" will set sectors as '0', and
>>>> then the raids isn't meaningful any more due to mddev->dev_sectors is
>>>> '0'.
>>>>
>>>> Cc: Neil Brown <neilb@suse.com>
>>>> Signed-off-by: Zhilong Liu <zlliu@suse.com>
>>> Not sure about this, does size 0 disk really harm?
>>>
>>  From my site, I think changing the component size as '0' should be avoided.
>> When resize changing required and new_size < current_chunk_size, such as
>> raid5:
>>
>> raid5.c: raid5_resize()
>> ...
>> 7727         sectors &= ~((sector_t)conf->chunk_sectors - 1);
>> ...
>>
>> 'sectors' got '0'.
>>
>> then:
>> ...
>> 7743         mddev->dev_sectors = sectors;
>> ...
>>
>> the dev_sectors(the component size) got '0'.
>> same scenario happens in raid10.
>>
>> So, it's really not meaningful if changing the raid component_size to '0',
>> md
>> should give this scenario a test, otherwise, it's a trouble thing to restore
>> after
>> doing such invalid re-size.
> Yes, I understand how it could be 0. My question is what's wrong with a size-0
> disk? For example, if you don't setup file for a loop block device, its size is
> 0.
Sorry, I may not have understood your question clearly; let me describe this
scenario in more detail. A component_size of 0 is not a 0-size disk: the
resize does not change the size of the raid member disks to 0.

For example: mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
If the component_size is set to 0, what happens to the 'internal bitmap'? And
if I then try to make a filesystem on this raid, what happens? It is out of
my control.

I can provide more information if any question needs further discussion.

Hope this information is useful for you.
Here is a piece of the dmesg output from the following steps:
1. mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
2. mdadm -G /dev/md0 --size 511
3. mkfs.ext3 /dev/md0
The mkfs gets stuck indefinitely; the mkfs process cannot be killed and a
forced reboot is needed, and lots of copies of the same call trace are
printed in dmesg.

... ... ...
[   18.376342] async_tx: api initialized (async)
[   18.418992] md/raid:md0: not clean -- starting background reconstruction
[   18.419010] md/raid:md0: device sdc operational as raid disk 1
[   18.419011] md/raid:md0: device sdb operational as raid disk 0
[   18.419360] md/raid:md0: raid level 5 active with 2 out of 2 devices, 
algorithm 2
[   18.420881] random: nonblocking pool is initialized
[   18.421869] md: resync of RAID array md0
[   18.421880] md: md0: resync done.
[   18.504658] md: resync of RAID array md0
[   18.504666] md: md0: resync done.
[   18.504671] ------------[ cut here ]------------
[   18.504680] WARNING: CPU: 3 PID: 1396 at ../drivers/md/md.c:7571 
md_seq_show+0x7ad/0x7c0 [md_mod]()
[   18.504702] Modules linked in: raid456 async_raid6_recov async_memcpy 
libcrc32c async_pq async_xor async_tx md_mod sd_mod iscsi_tcp 
libiscsi_tcp libiscsi scsi_transport_iscsi af_packet iscsi_ibft 
iscsi_boot_sysfs softdog ppdev joydev serio_raw parport_pc parport 
pvpanic pcspkr i2c_piix4 processor button ata_generic btrfs xor raid6_pq 
cirrus ata_piix virtio_net virtio_balloon virtio_blk ahci drm_kms_helper 
syscopyarea libahci sysfillrect sysimgblt fb_sys_fops ttm drm uhci_hcd 
ehci_hcd usbcore virtio_pci virtio_ring virtio libata usb_common floppy 
sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod 
autofs4
[   18.504703] Supported: Yes
[   18.504704] CPU: 3 PID: 1396 Comm: mdadm Not tainted 4.4.73-5-default #1
[   18.504705] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
BIOS Bochs 01/01/2011
[   18.504707]  0000000000000000 ffffffff8131fe80 0000000000000000 
ffffffffa04ad656
[   18.504708]  ffffffff8107df61 ffff880036866700 ffff88003b9ab800 
0000000000000003
[   18.504710]  0000000000000000 00000000000c3000 ffffffffa049f87d 
ffffffff811c2bad
[   18.504710] Call Trace:
[   18.504721]  [<ffffffff81019b19>] dump_trace+0x59/0x310
[   18.504724]  [<ffffffff81019eba>] show_stack_log_lvl+0xea/0x170
[   18.504726]  [<ffffffff8101ac41>] show_stack+0x21/0x40
[   18.504729]  [<ffffffff8131fe80>] dump_stack+0x5c/0x7c
[   18.504733]  [<ffffffff8107df61>] warn_slowpath_common+0x81/0xb0
[   18.504738]  [<ffffffffa049f87d>] md_seq_show+0x7ad/0x7c0 [md_mod]
[   18.504747]  [<ffffffff8122761c>] seq_read+0x22c/0x370
[   18.504751]  [<ffffffff8126b8a9>] proc_reg_read+0x39/0x70
[   18.504754]  [<ffffffff81204eb3>] __vfs_read+0x23/0x130
[   18.504756]  [<ffffffff81205a3a>] vfs_read+0x7a/0x120
[   18.504758]  [<ffffffff81206b42>] SyS_read+0x42/0xa0
[   18.504761]  [<ffffffff8160916e>] entry_SYSCALL_64_fastpath+0x12/0x6d
[   18.506306] DWARF2 unwinder stuck at entry_SYSCALL_64_fastpath+0x12/0x6d

[   18.506307] Leftover inexact backtrace:

[   18.506309] ---[ end trace 6a7e8bf93781c207 ]---
[   18.566273] md: resync of RAID array md0
[   18.566282] md: md0: resync done.
[   18.566284] ------------[ cut here ]------------
[   18.566294] WARNING: CPU: 3 PID: 1396 at ../drivers/md/md.c:7571 
md_seq_show+0x7ad/0x7c0 [md_mod]()
[   18.566316] Modules linked in: raid456 async_raid6_recov async_memcpy 
libcrc32c async_pq async_xor async_tx md_mod sd_mod iscsi_tcp 
libiscsi_tcp libiscsi scsi_transport_iscsi af_packet iscsi_ibft 
iscsi_boot_sysfs softdog ppdev joydev serio_raw parport_pc parport 
pvpanic pcspkr i2c_piix4 processor button ata_generic btrfs xor raid6_pq 
cirrus ata_piix virtio_net virtio_balloon virtio_blk ahci drm_kms_helper 
syscopyarea libahci sysfillrect sysimgblt fb_sys_fops ttm drm uhci_hcd 
ehci_hcd usbcore virtio_pci virtio_ring virtio libata usb_common floppy 
sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod 
autofs4
[   18.566316] Supported: Yes
[   18.566318] CPU: 3 PID: 1396 Comm: mdadm Tainted: G W          
4.4.73-5-default #1
[   18.566319] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
BIOS Bochs 01/01/2011
[   18.566321]  0000000000000000 ffffffff8131fe80 0000000000000000 
ffffffffa04ad656
[   18.566322]  ffffffff8107df61 ffff880036866700 ffff88003b9ab800 
0000000000000003
[   18.566324]  0000000000000000 00000000000c3000 ffffffffa049f87d 
0000000000001000
[   18.566324] Call Trace:
[   18.566334]  [<ffffffff81019b19>] dump_trace+0x59/0x310
[   18.566338]  [<ffffffff81019eba>] show_stack_log_lvl+0xea/0x170
[   18.566340]  [<ffffffff8101ac41>] show_stack+0x21/0x40
[   18.566343]  [<ffffffff8131fe80>] dump_stack+0x5c/0x7c
[   18.566347]  [<ffffffff8107df61>] warn_slowpath_common+0x81/0xb0
[   18.566352]  [<ffffffffa049f87d>] md_seq_show+0x7ad/0x7c0 [md_mod]
[   18.566361]  [<ffffffff8122761c>] seq_read+0x22c/0x370
[   18.566365]  [<ffffffff8126b8a9>] proc_reg_read+0x39/0x70
[   18.566367]  [<ffffffff81204eb3>] __vfs_read+0x23/0x130
[   18.566369]  [<ffffffff81205a3a>] vfs_read+0x7a/0x120
[   18.566371]  [<ffffffff81206b42>] SyS_read+0x42/0xa0
[   18.566375]  [<ffffffff8160916e>] entry_SYSCALL_64_fastpath+0x12/0x6d
[   18.567902] DWARF2 unwinder stuck at entry_SYSCALL_64_fastpath+0x12/0x6d

[   18.567903] Leftover inexact backtrace:

[   18.567905] ---[ end trace 6a7e8bf93781c208 ]---
[   18.641350] md: resync of RAID array md0
[   18.641360] md: md0: resync done.

Thanks,
-Zhilong

> Thanks,
> Shaohua
>



* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-16  7:31       ` Zhilong Liu
@ 2017-10-17  0:21         ` NeilBrown
  2017-10-23  9:23           ` Zhilong Liu
  0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2017-10-17  0:21 UTC (permalink / raw)
  To: Zhilong Liu, Shaohua Li; +Cc: linux-raid


On Mon, Oct 16 2017, Zhilong Liu wrote:

> On 10/14/2017 03:05 AM, Shaohua Li wrote:
>> On Fri, Oct 13, 2017 at 10:47:29AM +0800, Zhilong Liu wrote:
>>>
>>> On 10/13/2017 01:37 AM, Shaohua Li wrote:
>>>> On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
>>>>> Against the raids which chunk_size is meaningful, the component_size
>>>>> must be >= chunk_size when require resize. If "new_size < chunk_size"
>>>>> has required, the "mddev->pers->resize" will set sectors as '0', and
>>>>> then the raids isn't meaningful any more due to mddev->dev_sectors is
>>>>> '0'.
>>>>>
>>>>> Cc: Neil Brown <neilb@suse.com>
>>>>> Signed-off-by: Zhilong Liu <zlliu@suse.com>
>>>> Not sure about this, does size 0 disk really harm?
>>>>
>>>  From my site, I think changing the component size as '0' should be avoided.
>>> When resize changing required and new_size < current_chunk_size, such as
>>> raid5:
>>>
>>> raid5.c: raid5_resize()
>>> ...
>>> 7727         sectors &= ~((sector_t)conf->chunk_sectors - 1);
>>> ...
>>>
>>> 'sectors' got '0'.
>>>
>>> then:
>>> ...
>>> 7743         mddev->dev_sectors = sectors;
>>> ...
>>>
>>> the dev_sectors(the component size) got '0'.
>>> same scenario happens in raid10.
>>>
>>> So, it's really not meaningful if changing the raid component_size to '0',
>>> md
>>> should give this scenario a test, otherwise, it's a trouble thing to restore
>>> after
>>> doing such invalid re-size.
>> Yes, I understand how it could be 0. My question is what's wrong with a size-0
>> disk? For example, if you don't setup file for a loop block device, its size is
>> 0.
> I'm sorry I'm not very clear with your question, I try to describe more 
> on this scenario.
> the 0-component_size isn't a 0-size disk. resize doesn't change 
> raid_member_disk size
> to 0.
>
> For example: mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
> if set the component_size to 0, how would the 'internal bitmap' be? And 
> if I want to make
> a file-system on this raid, how would it be? it's out of my control.
>
> I would continue to provide infos for you if any questions needs further 
> discussion.
>
> Hope this information is useful for you.
> Here is piece of dmesg for the following steps:
> 1. mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
> 2. mdadm -G /dev/md0 --size 511
> 3. mkfs.ext3 /dev/md0
> the mkfs would be stuck all time, cannot kill the mkfs process and have to
> force to reboot, then lots of same call trace prints in dmesg.

I think the cause of this problem is that raid5_size() treats zero
values for 'sectors' and 'raid_disks' as "don't change".

So setting the size to zero will change mddev->dev_sectors but not
mddev->array_size.
This causes internal confusion.
Maybe we should use a different number to mean "don't change"??

This could affect any of the ->size() functions.
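
For reference, here is a toy userspace model of that convention (not the
kernel source; the struct, the function name and all values are assumed
examples), showing how a zero argument reports the old capacity back even
though dev_sectors would afterwards be set to 0:

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the relevant r5conf/mddev fields. */
struct toy_conf {
	uint64_t dev_sectors;	/* current per-device (component) size */
	int raid_disks;		/* current number of devices */
	uint64_t chunk_sectors;	/* chunk size in sectors, a power of two */
	int max_degraded;	/* parity devices: 1 for raid5 */
};

/* Mirrors the "->size()" convention: 0 means "keep the current value". */
static uint64_t toy_raid5_size(struct toy_conf *c, uint64_t sectors, int raid_disks)
{
	if (!sectors)
		sectors = c->dev_sectors;
	if (!raid_disks)
		raid_disks = c->raid_disks;

	sectors &= ~(c->chunk_sectors - 1);
	return sectors * (raid_disks - c->max_degraded);
}

int main(void)
{
	struct toy_conf c = {
		.dev_sectors = 8 * 1024 * 1024,	/* 4GiB components */
		.raid_disks = 2,
		.chunk_sectors = 1024,
		.max_degraded = 1,
	};
	uint64_t requested = 511;

	/* resize first rounds the request down to a chunk multiple: 511 -> 0 */
	requested &= ~(c.chunk_sectors - 1);

	/* 0 is then read as "keep the current size", so the old capacity is
	 * returned while dev_sectors would still end up being set to 0 */
	printf("reported capacity: %llu sectors\n",
	       (unsigned long long)toy_raid5_size(&c, requested, 0));
	return 0;
}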

NeilBrown



* Re: [PATCH] md: ensure sectors is nonzero when change component size
  2017-10-17  0:21         ` NeilBrown
@ 2017-10-23  9:23           ` Zhilong Liu
  0 siblings, 0 replies; 7+ messages in thread
From: Zhilong Liu @ 2017-10-23  9:23 UTC (permalink / raw)
  To: NeilBrown, Shaohua Li; +Cc: linux-raid



On 10/17/2017 08:21 AM, NeilBrown wrote:
> On Mon, Oct 16 2017, Zhilong Liu wrote:
>
>> On 10/14/2017 03:05 AM, Shaohua Li wrote:
>>> On Fri, Oct 13, 2017 at 10:47:29AM +0800, Zhilong Liu wrote:
>>>> On 10/13/2017 01:37 AM, Shaohua Li wrote:
>>>>> On Thu, Oct 12, 2017 at 04:30:51PM +0800, Zhilong Liu wrote:
>>>>>> Against the raids which chunk_size is meaningful, the component_size
>>>>>> must be >= chunk_size when require resize. If "new_size < chunk_size"
>>>>>> has required, the "mddev->pers->resize" will set sectors as '0', and
>>>>>> then the raids isn't meaningful any more due to mddev->dev_sectors is
>>>>>> '0'.
>>>>>>
>>>>>> Cc: Neil Brown <neilb@suse.com>
>>>>>> Signed-off-by: Zhilong Liu <zlliu@suse.com>
>>>>> Not sure about this, does size 0 disk really harm?
>>>>>
>>>>   From my site, I think changing the component size as '0' should be avoided.
>>>> When resize changing required and new_size < current_chunk_size, such as
>>>> raid5:
>>>>
>>>> raid5.c: raid5_resize()
>>>> ...
>>>> 7727         sectors &= ~((sector_t)conf->chunk_sectors - 1);
>>>> ...
>>>>
>>>> 'sectors' got '0'.
>>>>
>>>> then:
>>>> ...
>>>> 7743         mddev->dev_sectors = sectors;
>>>> ...
>>>>
>>>> the dev_sectors(the component size) got '0'.
>>>> same scenario happens in raid10.
>>>>
>>>> So, it's really not meaningful if changing the raid component_size to '0',
>>>> md
>>>> should give this scenario a test, otherwise, it's a trouble thing to restore
>>>> after
>>>> doing such invalid re-size.
>>> Yes, I understand how it could be 0. My question is what's wrong with a size-0
>>> disk? For example, if you don't setup file for a loop block device, its size is
>>> 0.
>> I'm sorry I'm not very clear with your question, I try to describe more
>> on this scenario.
>> the 0-component_size isn't a 0-size disk. resize doesn't change
>> raid_member_disk size
>> to 0.
>>
>> For example: mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
>> if set the component_size to 0, how would the 'internal bitmap' be? And
>> if I want to make
>> a file-system on this raid, how would it be? it's out of my control.
>>
>> I would continue to provide infos for you if any questions needs further
>> discussion.
>>
>> Hope this information is useful for you.
>> Here is piece of dmesg for the following steps:
>> 1. mdadm -CR /dev/md0 -b internal -l5 -n2 -x1 /dev/sd[b-d]
>> 2. mdadm -G /dev/md0 --size 511
>> 3. mkfs.ext3 /dev/md0
>> the mkfs would be stuck all time, cannot kill the mkfs process and have to
>> force to reboot, then lots of same call trace prints in dmesg.
> I think the cause of this problem is that raid5_size() treats zero
> values for 'sectors' and 'raid_disks' as "don't change".
>
> So setting the size to zero will change mddev->dev_sectors but not
> mddev->array_size.
> This causes internal confusion.
> Maybe we should use a different number of "don't change" ??
>
> This could affect any of the ->size() functions.

Thanks for the clarification. Yes, this makes the array unusable: for example
the mkfs process gets stuck, and further attempts to resize the array are
refused by mdadm because the component_size is '0', etc.

Thanks,
-Zhilong

>
> NeilBrown



