* Re: Unable to (un)-grow raid6
       [not found] <4E26C335-DEDB-489E-B54E-A285273569A1@alkaline-solutions.com>
@ 2015-12-06  3:04 ` David Waite
  2015-12-06 20:35   ` Phil Turmel
  0 siblings, 1 reply; 5+ messages in thread
From: David Waite @ 2015-12-06  3:04 UTC (permalink / raw)
  To: linux-raid

I’m having difficulty shrinking a RAID6 array (md2) on a Synology NAS. I wish to go from 13 drives to 11, and believe I need to go to 12 first to maintain operation and redundancy through the resizing process.

The --array-size has already been shrunk to account for the two drives I wish to remove; one of the devices has been removed and the machine restarted before the command output below was generated.
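
(For reference, the shrink was done with a command of roughly this form -- the
exact size value is omitted here; --array-size takes the new size in KiB by
default, or "max" to restore the full size:

# mdadm --grow /dev/md2 --array-size=<new size>
)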

---
# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid6 sda3[0] sdia3[14] sdib3[9] sdic3[10] sdid3[11] sdie3[12] sdg3[17] sdf3[5] sde3[16] sdd3[3] sdc3[13] sdb3[15]
      26329895935 blocks super 1.2 level 6, 64k chunk, algorithm 2 [13/12] [UUUUUUU_UUUUU]
      
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6]
      2097088 blocks [8/7] [UUUUUUU_]
      
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
      2490176 blocks [8/7] [UUUUUUU_]
      
unused devices: <none>

---

When running:
# mdadm --grow -n 12 /dev/md2 --backup-file=/mnt/backup-file3
mdadm: max_devs [384] of [/dev/md2]
mdadm: Need to backup 7040K of critical section..
mdadm: Cannot set device shape for /dev/md2: Invalid argument

---

dmesg just reports:
[87908.606245] md: couldn't update array info. -22

---

# uname -a
Linux diskstation 3.10.77 #7135 SMP Thu Oct 15 13:36:56 CST 2015 x86_64 GNU/Linux synology_avoton_1815+

---

Anyone have advice on how to proceed? I thought shrinking a RAID6 array was supported as long as a backup file is used to allow recovery of the initial blocks.

-DW


* Re: Unable to (un)-grow raid6
  2015-12-06  3:04 ` Unable to (un)-grow raid6 David Waite
@ 2015-12-06 20:35   ` Phil Turmel
  2015-12-06 21:13     ` David Waite
  0 siblings, 1 reply; 5+ messages in thread
From: Phil Turmel @ 2015-12-06 20:35 UTC (permalink / raw)
  To: David Waite, linux-raid

On 12/05/2015 10:04 PM, David Waite wrote:
> I’m having difficulty shrinking a RAID6 array (md2) on a Synology NAS. I wish to go from 13 drives to 11, and believe I need to go to 12 first to maintain operation and redundancy through the resizing process.

No, you can go straight to 11 if you've set array-size properly.  --grow
operations maintain redundancy throughout.

> The --array-size has already been shrunk to account for the two drives I wish to remove; one of the devices has been removed and the machine restarted before the command output below was generated.

You might need to set array-size again.  It is a temporary setting that
only becomes permanent when --grow successfully starts.
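
Roughly (the size here is a placeholder -- work out the real value for 11
devices first):

# mdadm --detail /dev/md2 | grep 'Array Size'      # check the current value
# mdadm --grow /dev/md2 --array-size=<size for 11 devices, in KiB>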

> # cat /proc/mdstat 
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
> md2 : active raid6 sda3[0] sdia3[14] sdib3[9] sdic3[10] sdid3[11] sdie3[12] sdg3[17] sdf3[5] sde3[16] sdd3[3] sdc3[13] sdb3[15]
>       26329895935 blocks super 1.2 level 6, 64k chunk, algorithm 2 [13/12] [UUUUUUU_UUUUU]
>       
> md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6]
>       2097088 blocks [8/7] [UUUUUUU_]
>       
> md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
>       2490176 blocks [8/7] [UUUUUUU_]
>       
> unused devices: <none>


> When running 
> # mdadm --grow -n 12 /dev/md2 --backup-file=/mnt/backup-file3
> mdadm: max_devs [384] of [/dev/md2]
> mdadm: Need to backup 7040K of critical section..
> mdadm: Cannot set device shape for /dev/md2: Invalid argument
> 
> ---
> 
> dmesg just reports:
> [87908.606245] md: couldn't update array info. -22

Error code 22 (EINVAL) is "Invalid argument", matching the mdadm output.

Try setting the array size again, then retry the --grow.  Feel free to go
straight to 11 devices -- it'll save you a lot of time.
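
Something like (placeholder size, reusing your existing backup-file path):

# mdadm --grow /dev/md2 --array-size=<size for 11 devices, in KiB>
# mdadm --grow -n 11 /dev/md2 --backup-file=/mnt/backup-file3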

Also, for completeness, show --detail and --examine output for the array and
its members.
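
i.e. something like (adjust the device list to your actual members):

# mdadm --detail /dev/md2
# mdadm --examine /dev/sd[a-g]3 /dev/sdi[a-e]3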

Phil


* Re: Unable to (un)-grow raid6
  2015-12-06 20:35   ` Phil Turmel
@ 2015-12-06 21:13     ` David Waite
  2015-12-06 21:21       ` Phil Turmel
  0 siblings, 1 reply; 5+ messages in thread
From: David Waite @ 2015-12-06 21:13 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid


> On Dec 6, 2015, at 1:35 PM, Phil Turmel <philip@turmel.org> wrote:
> 
> On 12/05/2015 10:04 PM, David Waite wrote:
>> I’m having difficulty shrinking a RAID6 array (md2) on a Synology NAS. I wish to go from 13 drives to 11, and believe I need to go to 12 first to maintain operation and redundancy through the resizing process.
> 
> No, you can go straight to 11 if you've set array-size properly.  --grow
> operations maintain redundancy throughout.

I thought --grow maintains redundancy against power loss but not disk failure.

Would I do this by simply marking the other drive I want to remove as failed?

I’ll try --array-size again. How is the array-size suggested by mdadm calculated? The drives are not of uniform size.

-DW

* Re: Unable to (un)-grow raid6
  2015-12-06 21:13     ` David Waite
@ 2015-12-06 21:21       ` Phil Turmel
       [not found]         ` <BACF203A-3817-4F24-88B0-38713D9459D9@alkaline-solutions.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Phil Turmel @ 2015-12-06 21:21 UTC (permalink / raw)
  To: David Waite; +Cc: linux-raid

On 12/06/2015 04:13 PM, David Waite wrote:
> 
>> On Dec 6, 2015, at 1:35 PM, Phil Turmel <philip@turmel.org> wrote:
>>
>> On 12/05/2015 10:04 PM, David Waite wrote:
>>> I’m having difficulty shrinking a RAID6 array (md2) on a Synology NAS. I wish to go from 13 drives to 11, and believe I need to go to 12 first to maintain operation and redundancy through the resizing process.
>>
>> No, you can go straight to 11 if you've set array-size properly.  --grow
>> operations maintain redundancy throughout.
> 
> I thought --grow maintains redundancy against power loss but not disk failure.
> 
> Would I do this by simply marking the other drive I want to remove as failed?

No!  Is that what you did with the other one?

--grow with a reduction in device count leaves the unneeded drive(s) as hot
spares when it completes.  That's when you remove them.  Or, if it didn't
pick the drives you wanted to remove, you can then do a --replace operation,
which also maintains redundancy throughout.
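
For example (hypothetical member names, and it needs an mdadm new enough to
know --replace):

# mdadm /dev/md2 --replace /dev/sdX3 --with /dev/sdY3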

> I’ll try --array-size again. How is the array-size suggested by mdadm calculated? The drives are not of uniform size.

You didn't post your --detail and --examine output as requested, so I
can't be specific.  For parity arrays, the size of the smallest member
controls the size of the array.
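
As a rough illustration (made-up numbers, not taken from your array): RAID6
usable space is (members - 2) * smallest-member-size, so 11 members whose
smallest component is 2900000000 KiB would give about 9 * 2900000000 KiB.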

Phil

* Re: Unable to (un)-grow raid6
       [not found]             ` <10980501-F73B-4A63-AF7D-D7100C9C6B72@alkaline-solutions.com>
@ 2015-12-09  3:12               ` Phil Turmel
  0 siblings, 0 replies; 5+ messages in thread
From: Phil Turmel @ 2015-12-09  3:12 UTC (permalink / raw)
  To: David Waite; +Cc: Linux-RAID

On 12/08/2015 06:42 PM, David Waite wrote:
> Thanks for your help - it is now reshaping the array with sdia3 and
> sdib3 as hot spares.
> 
> It appears the version of mdadm on the NAS (3.1.4) is pretty old, and
> does not have a replace command - just marking disks as faulty. I
> will build and load in a newer version of mdadm. I assume there is no
> issue in doing this while a reshape is going on?

No issue.  You don't even have to install it.  However, like most
features, there's a kernel part and a userspace part.  If the new mdadm
that supports --replace can't talk to the matching kernel part, it won't
work.
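
A minimal sketch, assuming a usable toolchain on (or cross-compiling for) the
NAS and the upstream mdadm tree:

# git clone git://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
# cd mdadm && make
# ./mdadm --version     # run straight from the build directory, no install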

Either way, you can't actually do the --replace until the reshape finishes.
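
e.g. keep an eye on it with:

# cat /proc/mdstat            # shows reshape progress
# mdadm --wait /dev/md2       # blocks until the reshape/resync has finished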

{Convention on kernel.org lists is to reply-to-all, trim replies, and
bottom post.  Please do.}

Phil

