* Re: status of raid 4/5 disk reduce
@ 2008-12-12 15:38 David Lethe
  0 siblings, 0 replies; 19+ messages in thread
From: David Lethe @ 2008-12-12 15:38 UTC (permalink / raw)
  To: Michael Brancato, Alex Lilley; +Cc: linux-raid



-----Original Message-----

From:  "Michael Brancato" <mike@mikebrancato.com>
Subj:  Re: status of raid 4/5 disk reduce
Date:  Wed Dec 10, 2008 6:07 pm
Size:  3K
To:  "Alex Lilley" <alex@redwax.co.uk>
cc:  "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>

 
> There is the very obvious use to reduce the number of drives but 
> ultimately have a larger array if the drives are all larger. And there 
> should be no issue with file system/lvm resizing as these can generally 
> grow on-line anyway. 
>  
> I appreciate that shrinking the size of the array and doing so onto less 
> disks is both an unlikely requirement and fraught with danger.   Growing 
> the size of the array but to less disks is very useful indeed, which is 
> what I was getting at. 
 
Hardware limitations are a good use case.  When I say reduce, I mean 
--grow -nX and not necessarily reducing the size of the array in the end. 
 
>>>> This is a lot to ask for in terms of development, and creates extreme 
>>>> risk of data loss. 
>>>> First, you degrade /dev/md0, so any bad blocks or drive failures will 
>>>> cause catastrophic 
>>>> data loss, unless /dev/disk4 is used for mirroring in the interim. 
 
This is a standard fact of RAID45.  Any RAID45 with a failed drive is  
subject to these same concerns.  Isn't this true today with grow if  
replacing a 4x100GB array with 4x200GB by replacing one drive at a time? 
 
>>>> Secondly, by removing that disk (for sake of argument, say each disk is 
>>>> 1TB. You go from 3TB usable data 
>>>> to 2TB.  Most likely, you need to resize the file system in place so it 
>>>> fits into 2TB.  You're probably booted 
>>>> onto md0 also, which makes it difficult.  Resizing a hot filesystem 
>>>> without scratch space??  If your file system 
>>>> can't be dynamically reduced, then no point worrying about md raid. 
 
There are a lot of assumptions here about how the array is used,  
filesystem support, etc.  I'm not saying that in every situation this is  
ideal.  There are many situations where md0 is not the boot device, md0  
is not the device to be contracted, and the filesystem supports either  
online or offline resizing.  Concerns about filesystem expansion or 
contraction (online or not) are separate from array shrinking, and 
shrinking the size of the array is already possible. 
 
Neil Brown has previously responded to a comment on the topic at  
http://neil.brown.name/blog/20050727143147 in regards to a --shrink option. 
 
Here are a few use cases: 
 
Hardware limitations - Replacing 4x120GB drives with 3x500GB drives. 
This would involve replacing each 120GB disk with a 500GB one at a time 
and rebuilding after each swap, before reshaping the array to 3 drives 
and growing to use all the space on the new drives.  This is especially 
useful on a system which cannot increase the number of drives it has 
(4 max), only their capacity. 
 
Drive failure - A developer, home user or SMB has a drive failure in an 
array.  Due to money, time, shipping delays, etc., the user cannot 
replace the drive immediately and the array sits in a degraded state. 
The user shrinks the filesystem by one drive's worth of space and 
shrinks the array to return it to an optimal state.  The array would be 
back in a protected state in hours rather than the days spent waiting 
on a replacement drive. 
 
Flexibility - A user wishes to free a disk in an array which is  
oversized to use that disk elsewhere. 
 
I hope this gives a better understanding of the usefulness of reducing 
the number of disks in a RAID45 array. 
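 
A minimal sketch of the hardware-limitation case above, assuming an 
ext3 filesystem on /dev/md0; the member names are illustrative, and the 
last three commands assume a future md/mdadm that can actually reduce 
the device count: 
 
$ sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# physically swap in a 500GB drive, partition it, then re-add it:
$ sudo mdadm /dev/md0 --add /dev/sdb1
$ cat /proc/mdstat               # wait for the rebuild to finish
# repeat fail/remove/add for the other two new drives, then:
$ sudo mdadm --grow /dev/md0 -n3 --backup-file=/root/md0-reshape.bak
# (hypothetical step; the data must already fit within 3 x 120GB)
$ sudo mdadm --grow /dev/md0 --size=max
$ sudo resize2fs /dev/md0        # grow the filesystem into the new space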
-- 
Mike Brancato, CISSP 
-- 
To unsubscribe from this list: send the line "unsubscribe linux-raid" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at  http://vger.kernel.org/majordomo-info.html 
 
Statistically speaking, if you are in degraded mode, the worst thing to do would be a resize.  It would take 3x-14x longer than a rebuild, as every block of all n drives has to be read and multiple writes go to all n-1 surviving disks.  Do the math.
The nature of the I/O means you won't get a lot of help from cache either.

If you are degraded, the last thing you want to do is pound the surviving drives this way.  An experienced admin would spend the time doing an incremental backup, or at least turn off the computer if they didn't have a spare disk.  Granted, there are some valid scenarios for resizing, but doing so with a degraded md array is just not a smart idea.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-08 20:59 Michael Brancato
  2008-12-09 21:11 ` Alex Lilley
@ 2008-12-15 23:18 ` Neil Brown
  1 sibling, 0 replies; 19+ messages in thread
From: Neil Brown @ 2008-12-15 23:18 UTC (permalink / raw)
  To: Michael Brancato; +Cc: linux-raid

On Monday December 8, mike@mikebrancato.com wrote:
> I'm curious as to the status of the ability to reduce the number of 
> disks in a RAID 4/5 array.  I would like the ability to reshape a 4 disk 
> raid4/5 to a 3 disk raid4/5 for flexibility.
> 
> here is what I want to do....
> $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
> mdadm: set /dev/disk4 faulty in /dev/md0
> mdadm: hot removed /dev/disk4
> $ sudo mdadm --grow /dev/md0 -n3
> mdadm: /dev/md0: Cannot reduce number of data disks (yet).
> 
> I know this capability is missing in the md driver.  What is needed to 
> make it work and is anyone currently working on it?

It is on my todo list, but I am not currently working on it.  Maybe
next year.

There are three sorts of restriping.

1/ When the total amount of space grows.
   In this case we are (for the most part) reading data from later in
   the devices and writing it somewhere earlier in the devices.
   So we progress forward through the devices (from low block
   addresses to high block addresses), often having two copies of the
   data that is currently being moved, so we can be sure of finding
   good data after a crash.

2/ When the total amount of space shrinks.
   This is the reverse of the above.  Data is moved from early in the
   device to later in the device, so we start at the end and move
   backwards (from high block addresses to low block addresses).
   Again, the data which is currently being moved is easily safe in
   the face of a system crash.

3/ When the total amount of space remains unchanged (e.g. raid5 to
   raid6 with one extra device).
   To make this crash proof we would need to copy N stripes of data
   into some backup area, then copy it back in the new layout, then
   update the metadata.
   So this will be much slower than other restriping (which is already
   slow). 

Only '1' is currently implemented.

'2' should be fairly easy.  There are some fiddly bits such as mapping
the forward progress that md insists on into a backwards progress, but
that is quite manageable.

'3' would require a user-space helper to be copying data to backup and
then allowing the restripe to progress a bit further.
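
For concreteness, a sketch of how '2' and '3' might eventually be
driven from user space; the invocations, sizes and backup paths are
purely illustrative, since neither is implemented yet:

$ sudo mdadm --grow /dev/md0 --array-size=200G        # shrink first ('2')
$ sudo mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0.bak
$ sudo mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
       --backup-file=/root/md0.bak                    # raid5 -> raid6 ('3')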

NeilBrown

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11 15:24                   ` David Lethe
@ 2008-12-11 16:13                     ` Michael Brancato
  0 siblings, 0 replies; 19+ messages in thread
From: Michael Brancato @ 2008-12-11 16:13 UTC (permalink / raw)
  To: David Lethe; +Cc: Mikael Abrahamsson, John Robinson, Linux RAID

On Dec 11, 2008, at 10:24 AM, "David Lethe" <david@santools.com> wrote:

>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> owner@vger.kernel.org] On Behalf Of Mikael Abrahamsson
>> Sent: Thursday, December 11, 2008 8:47 AM
>> To: John Robinson
>> Cc: Linux RAID
>> Subject: Re: status of raid 4/5 disk reduce
>>
>> On Thu, 11 Dec 2008, John Robinson wrote:
>>
>>> But my 1U server only physically has room for 4 drives. It doesn't
>>> matter how many extra controllers I buy, I can't attach more drives.
>>
>> You can use a temporary external eSATA enclosure, pvmove the data if
>> you're using LVM, then take downtime when you exchange your new  
>> drives
>> into the internal drive bays (or you could even move them one by one
> by
>> hotremoving one drive from the eSATA enclosure into the internal  
>> drive
>> bays.
>>
>> --
>> Mikael Abrahamsson    email: swmike@swm.pp.se
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid"
>> in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
> Or you can crack open another PC and get power from that via long  
> power
> cables. Since you have a rack-mounted system, chances are good you  
> have
> more rack-mounted systems.   Of course this is still all moot, because
> John insists that the RAID reduce function doesn't need to address
> reducing mounted file systems  ... and your 1U system is booted to the
> very file system that needs to be reduced.
>
> So we have come full circle.  A md reduce isn't practical unless it
> includes on-line file system support, which is a deal-killer as it not
> only requires massive development efforts outside of the LINUX RAID
> group, but has to be done in conjunction with the developers in this
> group.
>
> Just suck it up and mount all the disks somehow (or use backup).   
> Either
> give up on resizing, or install Solaris with ZFS boot, then you can
> resize all  you want.
>
>
> David

David,
I must correct your statements. You are the only person here who  
insists on online FS shrinking.  You are the only person with the  
misconception that the ability for a FS to shrink and the ability to  
reshape an array are interdependent. You also cannot shrink ZFS via  
vdev removal / replacement.  I think you meant VxFS, which can  
evacuate data from disks to shrink.

What about a 2U, 8-drive server with a RAID1 boot array and a 6-drive  
RAID5?  I'm sure that no matter the situation, there is a  
labor-intensive, risky, power-cord-snaking alternative that involves  
additional hardware.  But providing a way to reshape via unmount,  
resize, remount, reshape is a practical alternative regardless of  
whether it is the only possible way to achieve the same results.  
In-place expansion and shrinking offer numerous benefits in  
administration.

You can continue your Rube Goldberg array shrinks to your heart's  
content.  But it comes off as rude to insist that yours is the only  
solution to array shrinking while dismissing an in-place reshape reduce. 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11 14:46                 ` Mikael Abrahamsson
  2008-12-11 15:24                   ` David Lethe
@ 2008-12-11 15:27                   ` Michael Brancato
  1 sibling, 0 replies; 19+ messages in thread
From: Michael Brancato @ 2008-12-11 15:27 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: John Robinson, Linux RAID


On Dec 11, 2008, at 9:46 AM, Mikael Abrahamsson <swmike@swm.pp.se>  
wrote:

> On Thu, 11 Dec 2008, John Robinson wrote:
>
>> But my 1U server only physically has room for 4 drives. It doesn't  
>> matter how many extra controllers I buy, I can't attach more drives.
>
> You can use a temporary external eSATA enclosure, pvmove the data if  
> you're using LVM, then take downtime when you exchange your new  
> drives into the internal drive bays (or you could even move them one  
> by one by hotremoving one drive from the eSATA enclosure into the  
> internal drive bays.

What about when, on a Friday, one of his drives fails and he doesn't  
have spare hardware?  If he has enough free space he can shrink the FS  
and the array size, then reshape from 4 to 3 drives to regain parity  
protection over the weekend.

If degrading an array multiple times to support expansion by replacing  
one drive at a time is acceptable practice, why is degrading once for  
array contraction frowned upon?


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: status of raid 4/5 disk reduce
  2008-12-11 14:46                 ` Mikael Abrahamsson
@ 2008-12-11 15:24                   ` David Lethe
  2008-12-11 16:13                     ` Michael Brancato
  2008-12-11 15:27                   ` Michael Brancato
  1 sibling, 1 reply; 19+ messages in thread
From: David Lethe @ 2008-12-11 15:24 UTC (permalink / raw)
  To: Mikael Abrahamsson, John Robinson; +Cc: Linux RAID

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Mikael Abrahamsson
> Sent: Thursday, December 11, 2008 8:47 AM
> To: John Robinson
> Cc: Linux RAID
> Subject: Re: status of raid 4/5 disk reduce
> 
> On Thu, 11 Dec 2008, John Robinson wrote:
> 
> > But my 1U server only physically has room for 4 drives. It doesn't
> > matter how many extra controllers I buy, I can't attach more drives.
> 
> You can use a temporary external eSATA enclosure, pvmove the data if
> you're using LVM, then take downtime when you exchange your new drives
> into the internal drive bays (or you could even move them one by one
by
> hotremoving one drive from the eSATA enclosure into the internal drive
> bays.
> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid"
> in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


Or you can crack open another PC and get power from that via long power
cables. Since you have a rack-mounted system, chances are good you have
more rack-mounted systems.   Of course this is still all moot, because
John insists that the RAID reduce function doesn't need to address
reducing mounted file systems  ... and your 1U system is booted to the
very file system that needs to be reduced.

So we have come full circle.  An md reduce isn't practical unless it
includes on-line file system support, which is a deal-killer as it not
only requires massive development effort outside of the Linux RAID
group, but has to be done in conjunction with the developers in this
group.  

Just suck it up and mount all the disks somehow (or use backup).  Either
give up on resizing, or install Solaris with ZFS boot, then you can
resize all  you want.


David




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11 13:52                 ` Louis-David Mitterrand
@ 2008-12-11 15:13                   ` Michael Brancato
  0 siblings, 0 replies; 19+ messages in thread
From: Michael Brancato @ 2008-12-11 15:13 UTC (permalink / raw)
  To: Louis-David Mitterrand; +Cc: linux-raid


On Dec 11, 2008, at 8:52 AM, Louis-David Mitterrand <vindex+lists-linux-raid@apartia.org 
 > wrote:
>>
>>
>
> Not for xfs.

Nor can JFS2 (jfs) as implemented on Linux, UFS or Reiser4.

http://gparted.sourceforge.net/features.php

Again, reshape is independent of the higher-level filesystem.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11 11:43               ` John Robinson
@ 2008-12-11 14:46                 ` Mikael Abrahamsson
  2008-12-11 15:24                   ` David Lethe
  2008-12-11 15:27                   ` Michael Brancato
  0 siblings, 2 replies; 19+ messages in thread
From: Mikael Abrahamsson @ 2008-12-11 14:46 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux RAID

On Thu, 11 Dec 2008, John Robinson wrote:

> But my 1U server only physically has room for 4 drives. It doesn't 
> matter how many extra controllers I buy, I can't attach more drives.

You can use a temporary external eSATA enclosure, pvmove the data if 
you're using LVM, then take downtime when you swap your new drives into 
the internal drive bays (or you could even move them one by one by 
hot-removing one drive at a time from the eSATA enclosure into the 
internal drive bays).
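
A minimal sketch of the pvmove approach, assuming the array is an LVM
physical volume; the volume group name and device names are
illustrative:

$ sudo pvcreate /dev/sde1      # disk sitting in the temporary eSATA enclosure
$ sudo vgextend vg0 /dev/sde1
$ sudo pvmove /dev/md0         # migrate all extents off the old array, online
$ sudo vgreduce vg0 /dev/md0   # the old array is now free to be rebuilt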

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11  6:33               ` Michael Brancato
@ 2008-12-11 13:52                 ` Louis-David Mitterrand
  2008-12-11 15:13                   ` Michael Brancato
  0 siblings, 1 reply; 19+ messages in thread
From: Louis-David Mitterrand @ 2008-12-11 13:52 UTC (permalink / raw)
  To: linux-raid

On Thu, Dec 11, 2008 at 01:33:51AM -0500, Michael Brancato wrote:
>
> David Lethe wrote:
>
>>   Respectfully, go bother the LVM, jfs, ext, afs, and all the other 
>> file
>> system people.  You have zero chance of getting them on board to support
>> online file system shrinking without any guarantee of scratch space.
>> My advice is that you don't tell them you also want them to resize while
>> the md volume is being resized, and also don't tell them that the array
>> might be degraded.
>
> I didn't bring up nor argued the filesystem online resize issue, you  
> did.  Why does the filesystem have to be online during the reshape or  
> the shrink?  People shrink filesystems and partitions while offline  
> everyday, and the sun still rises.  Filesystem support for reshaping  
> (not resizing) should be a non-issue.  Resizing shrink exists today.

Not for xfs.

-- 
http://www.critikart.net

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11  4:30             ` David Lethe
  2008-12-11  6:33               ` Michael Brancato
  2008-12-11 11:43               ` John Robinson
@ 2008-12-11 11:51               ` Alex Lilley
  2 siblings, 0 replies; 19+ messages in thread
From: Alex Lilley @ 2008-12-11 11:51 UTC (permalink / raw)
  To: David Lethe; +Cc: Michael Brancato, linux-raid



David Lethe wrote:
>   
>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> owner@vger.kernel.org] On Behalf Of Michael Brancato
>> Sent: Wednesday, December 10, 2008 6:07 PM
>> To: Alex Lilley
>> Cc: linux-raid@vger.kernel.org
>> Subject: Re: status of raid 4/5 disk reduce
>>
>>
>>     
>>> There is the very obvious use to reduce the number of drives but
>>> ultimately have a larger array if the drives are all larger. And
>>>       
>> there
>>     
>>> should be no issue with file system/lvm resizing as these can
>>>       
>> generally
>>     
>>> grow on-line anyway.
>>>
>>> I appreciate that shrinking the size of the array and doing so onto
>>>       
>> less
>>     
>>> disks is both an unlikely requirement and fraught with danger.
>>>       
>> Growing
>>     
>>> the size of the array but to less disks is very useful indeed, which
>>>       
>> is
>>     
>>> what I was getting at.
>>>       
>> Hardware limitations is a good use case.  When I say reduce, I mean
>> --grow -nX and not necessarily reducing the size of the array in the
>> end.
>>
>>     
>>>>>> This is a lot to ask for in terms of development, and creates
>>>>>>             
>> extreme
>>     
>>>>>> risk of data loss.
>>>>>> First, you degrade /dev/md0, so any bad blocks or drive failures
>>>>>>             
>> will
>>     
>>>>>> cause catastrophic
>>>>>> data loss, unless /dev/disk4 is used for mirroring in the
>>>>>>             
> interim.
>   
>> This is a standard fact of RAID45.  Any RAID45 with a failed drive is
>> subject to these same concerns.  Isn't this true today with grow if
>> replacing a 4x100GB array with 4x200GB by replacing one drive at a
>> time?
>>
>>     
>>>>>> Secondly, by removing that disk (for sake of argument, say each
>>>>>>             
>> disk is
>>     
>>>>>> 1TB. You go from 3TB usable data
>>>>>> to 2TB.  Most likely, you need to resize the file system in place
>>>>>>             
>> so it
>>     
>>>>>> fits into 2TB.  You're probably booted
>>>>>> onto md0 also, which makes it difficult.  Resizing a hot
>>>>>>             
>> filesystem
>>     
>>>>>> without scratch space??  If your file system
>>>>>> can't be dynamically reduced, then no point worrying about md
>>>>>>             
>> raid.
>>
>> There are a lot of assumptions here about how the array is used,
>> filesystem support, etc.  I'm not saying that in every situation this
>> is
>> ideal.  There are many situations where md0 is not the boot device,
>>     
> md0
>   
>> is not the device to be contracted, and the filesystem supports either
>> online or offline resizing.  Concerns about filesystem expansion or
>> contraction (online or not) and array shrinking are mutually exclusive
>> of one another and shrinking the size of the array is already
>>     
> possible.
>   
>> Neil Brown has previously responded to a comment on the topic at
>> http://neil.brown.name/blog/20050727143147 in regards to a --shrink
>> option.
>>
>> Here are a few use cases:
>>
>> Hardware limitations - Replacing 4x120GB size drives with 3x500GB
>> drives.  This would involve replacing each 120GB disk with a 500GB one
>> at a time and rebuilding each before reshaping the array to 3 drives
>> and
>> growing to use all space on the new drives.  This is especially useful
>> on a system which cannot increase the number if drives it has (4 max),
>> only capacity.
>>
>> Drive failure - A developer, home user or SMB has a drive failure in
>>     
> an
>   
>> array.  Due to money, time, shipping delays, etc, the user cannot
>> replace the drive immediately and the drive is in a degraded state.
>> The
>> user shrinks the filesystem by 1 drive amount and shrinks the array to
>> return to a optimal state in the array.  The array would return to a
>> protected state in hours not days if waiting on a drive.
>>
>> Flexibility - A user wishes to free a disk in an array which is
>> oversized to use that disk elsewhere.
>>
>> I hope this give a better understanding of the usefulness of reducing
>> the amount of disks in a RAID45 array.
>> --
>> Mike Brancato, CISSP
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid"
>> in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>     
>
>
>   
> Respectfully, go bother the LVM, jfs, ext, afs, and all the other file
> system people.  You have zero chance of getting them on board to support
> online file system shrinking without any guarantee of scratch space.
> My advice is that you don't tell them you also want them to resize while
> the md volume is being resized, and also don't tell them that the array
> might be degraded.
>
> If you want to copy 4x120 into 3x500 ... mount all the disks and COPY
> the data.  If you are truly limited to 4 disks, and are too cheap to
> spend $10-20 for another controller, after buying 1.5TB worth of disk
> drives, then you really need to get your priorities in order.
>
> David
>
>
>   
The file system aspect is completely irrelevant - resizing is already 
possible in both directions.

Buying extra controllers isn't always appropriate.  There is the 
physical space issue in the case, the power consumption of more drives, 
not to mention heat; and seeing as big drives get cheaper by the day, 
why not get rid of all those small drives and replace them with fewer 
larger ones?  It is completely reasonable and not a question of 
misplaced priorities, just doing the most sensible and most manageable 
thing.  Fewer drives will always be better in RAID 5 and 6 because the 
more drives you have, the more chance there is of multiple failures.

Copying to a new array defeats the object of any reshaping and is 
completely impractical.

Rgds

Alex
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>   

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11  4:30             ` David Lethe
  2008-12-11  6:33               ` Michael Brancato
@ 2008-12-11 11:43               ` John Robinson
  2008-12-11 14:46                 ` Mikael Abrahamsson
  2008-12-11 11:51               ` Alex Lilley
  2 siblings, 1 reply; 19+ messages in thread
From: John Robinson @ 2008-12-11 11:43 UTC (permalink / raw)
  To: Linux RAID

On 11/12/2008 04:30, David Lethe wrote:
[...]
> If you want to copy 4x120 into 3x500 ... mount all the disks and COPY
> the data.  If you are truly limited to 4 disks, and are too cheap to
> spend $10-20 for another controller, after buying 1.5TB worth of disk
> drives, then you really need to get your priorities in order.

But my 1U server only physically has room for 4 drives. It doesn't 
matter how many extra controllers I buy, I can't attach more drives.

Cheers,

John.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-11  4:30             ` David Lethe
@ 2008-12-11  6:33               ` Michael Brancato
  2008-12-11 13:52                 ` Louis-David Mitterrand
  2008-12-11 11:43               ` John Robinson
  2008-12-11 11:51               ` Alex Lilley
  2 siblings, 1 reply; 19+ messages in thread
From: Michael Brancato @ 2008-12-11  6:33 UTC (permalink / raw)
  To: David Lethe; +Cc: Alex Lilley, linux-raid


David Lethe wrote:

>   
> Respectfully, go bother the LVM, jfs, ext, afs, and all the other file
> system people.  You have zero chance of getting them on board to support
> online file system shrinking without any guarantee of scratch space.
> My advice is that you don't tell them you also want them to resize while
> the md volume is being resized, and also don't tell them that the array
> might be degraded.

I didn't bring up or argue the filesystem online-resize issue; you 
did.  Why does the filesystem have to be online during the reshape or 
the shrink?  People shrink filesystems and partitions offline every 
day, and the sun still rises.  Filesystem support for reshaping (as 
opposed to resizing) should be a non-issue.  Shrinking by resizing 
exists today.
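
A minimal offline-shrink sketch for an ext3 filesystem on the array;
the target size and mount point are illustrative:

$ sudo umount /dev/md0          # ext3 can only be shrunk offline
$ sudo e2fsck -f /dev/md0       # resize2fs wants a freshly checked fs
$ sudo resize2fs /dev/md0 150G  # shrink below the intended array size
$ sudo mount /dev/md0 /srv/data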

> If you want to copy 4x120 into 3x500 ... mount all the disks and COPY
> the data.  If you are truly limited to 4 disks, and are too cheap to
> spend $10-20 for another controller, after buying 1.5TB worth of disk
> drives, then you really need to get your priorities in order.

Words fail me...  Why support reshaping RAID arrays by adding disks, 
then, if we should just go buy more disks and create a new array to 
copy the data to?

I don't know why you oppose the flexibility of reshape shrinking, but at 
one point it was planned.  I simply tried to see what the status was of 
the development and what was needed to get support for reshape shrink. 
When it comes to actually using it, to each his own.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: status of raid 4/5 disk reduce
  2008-12-11  0:07           ` Michael Brancato
@ 2008-12-11  4:30             ` David Lethe
  2008-12-11  6:33               ` Michael Brancato
                                 ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: David Lethe @ 2008-12-11  4:30 UTC (permalink / raw)
  To: Michael Brancato, Alex Lilley; +Cc: linux-raid



> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Michael Brancato
> Sent: Wednesday, December 10, 2008 6:07 PM
> To: Alex Lilley
> Cc: linux-raid@vger.kernel.org
> Subject: Re: status of raid 4/5 disk reduce
> 
> 
> > There is the very obvious use to reduce the number of drives but
> > ultimately have a larger array if the drives are all larger. And
> there
> > should be no issue with file system/lvm resizing as these can
> generally
> > grow on-line anyway.
> >
> > I appreciate that shrinking the size of the array and doing so onto
> less
> > disks is both an unlikely requirement and fraught with danger.
> Growing
> > the size of the array but to less disks is very useful indeed, which
> is
> > what I was getting at.
> 
> Hardware limitations is a good use case.  When I say reduce, I mean
> --grow -nX and not necessarily reducing the size of the array in the
> end.
> 
> >>>> This is a lot to ask for in terms of development, and creates
> extreme
> >>>> risk of data loss.
> >>>> First, you degrade /dev/md0, so any bad blocks or drive failures
> will
> >>>> cause catastrophic
> >>>> data loss, unless /dev/disk4 is used for mirroring in the
interim.
> 
> This is a standard fact of RAID45.  Any RAID45 with a failed drive is
> subject to these same concerns.  Isn't this true today with grow if
> replacing a 4x100GB array with 4x200GB by replacing one drive at a
> time?
> 
> >>>> Secondly, by removing that disk (for sake of argument, say each
> disk is
> >>>> 1TB. You go from 3TB usable data
> >>>> to 2TB.  Most likely, you need to resize the file system in place
> so it
> >>>> fits into 2TB.  You're probably booted
> >>>> onto md0 also, which makes it difficult.  Resizing a hot
> filesystem
> >>>> without scratch space??  If your file system
> >>>> can't be dynamically reduced, then no point worrying about md
> raid.
> 
> There are a lot of assumptions here about how the array is used,
> filesystem support, etc.  I'm not saying that in every situation this
> is
> ideal.  There are many situations where md0 is not the boot device,
md0
> is not the device to be contracted, and the filesystem supports either
> online or offline resizing.  Concerns about filesystem expansion or
> contraction (online or not) and array shrinking are mutually exclusive
> of one another and shrinking the size of the array is already
possible.
> 
> Neil Brown has previously responded to a comment on the topic at
> http://neil.brown.name/blog/20050727143147 in regards to a --shrink
> option.
> 
> Here are a few use cases:
> 
> Hardware limitations - Replacing 4x120GB size drives with 3x500GB
> drives.  This would involve replacing each 120GB disk with a 500GB one
> at a time and rebuilding each before reshaping the array to 3 drives
> and
> growing to use all space on the new drives.  This is especially useful
> on a system which cannot increase the number if drives it has (4 max),
> only capacity.
> 
> Drive failure - A developer, home user or SMB has a drive failure in
an
> array.  Due to money, time, shipping delays, etc, the user cannot
> replace the drive immediately and the drive is in a degraded state.
> The
> user shrinks the filesystem by 1 drive amount and shrinks the array to
> return to a optimal state in the array.  The array would return to a
> protected state in hours not days if waiting on a drive.
> 
> Flexibility - A user wishes to free a disk in an array which is
> oversized to use that disk elsewhere.
> 
> I hope this give a better understanding of the usefulness of reducing
> the amount of disks in a RAID45 array.
> --
> Mike Brancato, CISSP
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid"
> in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


  
Respectfully, go bother the LVM, jfs, ext, afs, and all the other file
system people.  You have zero chance of getting them on board to support
online file system shrinking without any guarantee of scratch space.
My advice is that you don't tell them you also want them to resize while
the md volume is being resized, and also don't tell them that the array
might be degraded.

If you want to copy 4x120 into 3x500 ... mount all the disks and COPY
the data.  If you are truly limited to 4 disks, and are too cheap to
spend $10-20 for another controller, after buying 1.5TB worth of disk
drives, then you really need to get your priorities in order.

David





^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-10 12:14         ` Alex Lilley
@ 2008-12-11  0:07           ` Michael Brancato
  2008-12-11  4:30             ` David Lethe
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Brancato @ 2008-12-11  0:07 UTC (permalink / raw)
  To: Alex Lilley; +Cc: linux-raid


> There is the very obvious use to reduce the number of drives but
> ultimately have a larger array if the drives are all larger. And there
> should be no issue with file system/lvm resizing as these can generally
> grow on-line anyway.
> 
> I appreciate that shrinking the size of the array and doing so onto less
> disks is both an unlikely requirement and fraught with danger.   Growing
> the size of the array but to less disks is very useful indeed, which is
> what I was getting at.

Hardware limitations are a good use case.  When I say reduce, I mean 
--grow -nX and not necessarily reducing the size of the array in the end.

>>>> This is a lot to ask for in terms of development, and creates extreme
>>>> risk of data loss.
>>>> First, you degrade /dev/md0, so any bad blocks or drive failures will
>>>> cause catastrophic
>>>> data loss, unless /dev/disk4 is used for mirroring in the interim.

This is a standard fact of RAID45.  Any RAID45 with a failed drive is 
subject to these same concerns.  Isn't this true today with grow if 
replacing a 4x100GB array with 4x200GB by replacing one drive at a time?

>>>> Secondly, by removing that disk (for sake of argument, say each disk is
>>>> 1TB. You go from 3TB usable data
>>>> to 2TB.  Most likely, you need to resize the file system in place so it
>>>> fits into 2TB.  You're probably booted
>>>> onto md0 also, which makes it difficult.  Resizing a hot filesystem
>>>> without scratch space??  If your file system
>>>> can't be dynamically reduced, then no point worrying about md raid.

There are a lot of assumptions here about how the array is used, 
filesystem support, etc.  I'm not saying that in every situation this is 
ideal.  There are many situations where md0 is not the boot device, md0 
is not the device to be contracted, and the filesystem supports either 
online or offline resizing.  Concerns about filesystem expansion or 
contraction (online or not) are separate from array shrinking, and 
shrinking the size of the array is already possible.

Neil Brown has previously responded to a comment on the topic at 
http://neil.brown.name/blog/20050727143147 in regards to a --shrink option.

Here are a few use cases:

Hardware limitations - Replacing 4x120GB drives with 3x500GB drives. 
This would involve replacing each 120GB disk with a 500GB one at a time 
and rebuilding after each swap, before reshaping the array to 3 drives 
and growing to use all the space on the new drives.  This is especially 
useful on a system which cannot increase the number of drives it has 
(4 max), only their capacity.

Drive failure - A developer, home user or SMB has a drive failure in an 
array.  Due to money, time, shipping delays, etc., the user cannot 
replace the drive immediately and the array sits in a degraded state. 
The user shrinks the filesystem by one drive's worth of space and 
shrinks the array to return it to an optimal state.  The array would be 
back in a protected state in hours rather than the days spent waiting 
on a replacement drive.

Flexibility - A user wishes to free a disk in an array which is 
oversized to use that disk elsewhere.

I hope this gives a better understanding of the usefulness of reducing 
the number of disks in a RAID45 array.
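
A minimal sketch of the drive-failure case, assuming an ext3 filesystem
and a hypothetical md/mdadm that supports device-count reduction; the
sizes, backup path and mount point are illustrative:

$ sudo umount /dev/md0
$ sudo e2fsck -f /dev/md0
$ sudo resize2fs /dev/md0 180G                    # fs now fits on 3 drives
$ sudo mdadm --grow /dev/md0 --array-size=190G    # hypothetical shrink step
$ sudo mdadm --grow /dev/md0 -n3 --backup-file=/root/md0.bak
$ sudo mount /dev/md0 /srv/data
# once the reshape completes, the 3-drive array is optimal again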
--
Mike Brancato, CISSP

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-09 23:15       ` Ryan Wagoner
@ 2008-12-10 12:14         ` Alex Lilley
  2008-12-11  0:07           ` Michael Brancato
  0 siblings, 1 reply; 19+ messages in thread
From: Alex Lilley @ 2008-12-10 12:14 UTC (permalink / raw)
  To: linux-raid

There is the very obvious use to reduce the number of drives but
ultimately have a larger array if the drives are all larger. And there
should be no issue with file system/lvm resizing as these can generally
grow on-line anyway.

I appreciate that shrinking the size of the array and doing so onto 
fewer disks is both an unlikely requirement and fraught with danger. 
Growing the size of the array but onto fewer disks is very useful 
indeed, which is what I was getting at.

Regards

Alex

Ryan Wagoner wrote:
>> Things like RAID1 -> RAID5 and RAID5 -> RAID6 reshaping seem to be more
>> in demand than shrinking as well.
>>     
>
> RAID 1 to RAID 5 can already be done with mdadm. The RAID 5 shrink
> could be useful in some situations. The risk of user error causing
> file system data loss is no worse than resizing an LVM volume without
> shrinking the file system first.
>
> Ryan
>
> On Tue, Dec 9, 2008 at 4:51 PM, Robin Hill <robin@robinhill.me.uk> wrote:
>   
>> On Tue Dec 09, 2008 at 03:33:17PM -0600, David Lethe wrote:
>>
>>     
>>>> -----Original Message-----
>>>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>>>> owner@vger.kernel.org] On Behalf Of Alex Lilley
>>>> Sent: Tuesday, December 09, 2008 3:12 PM
>>>> To: Michael Brancato
>>>> Cc: linux-raid@vger.kernel.org
>>>> Subject: Re: status of raid 4/5 disk reduce
>>>>
>>>> Hi Michael
>>>>
>>>> I posed this a few weeks back but haven't seen any activity on it yet
>>>> or
>>>> any suggestion as to when this might be possible.
>>>>
>>>> For reference, my thread started here:
>>>> http://marc.info/?l=linux-raid&m=122753511309332&w=2
>>>>
>>>> Cross fingers for this because I think it is a real killer feature.
>>>>
>>>> Regards
>>>>
>>>> Alex
>>>>
>>>> Michael Brancato wrote:
>>>>         
>>>>> I'm curious as to the status of the ability to reduce the number of
>>>>> disks in a RAID 4/5 array.  I would like the ability to reshape a 4
>>>>> disk raid4/5 to a 3 disk raid4/5 for flexibility.
>>>>>
>>>>> here is what I want to do....
>>>>> $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
>>>>> mdadm: set /dev/disk4 faulty in /dev/md0
>>>>> mdadm: hot removed /dev/disk4
>>>>> $ sudo mdadm --grow /dev/md0 -n3
>>>>> mdadm: /dev/md0: Cannot reduce number of data disks (yet).
>>>>>
>>>>> I know this capability is missing in the md driver.  What is needed to
>>>>> make it work and is anyone currently working on it?
>>>>>
>>>>> Regards,
>>>>>
>>>>>           
>>> This is a lot to ask for in terms of development, and creates extreme
>>> risk of data loss.
>>> First, you degrade /dev/md0, so any bad blocks or drive failures will
>>> cause catastrophic
>>> data loss, unless /dev/disk4 is used for mirroring in the interim.
>>>
>>> Secondly, by removing that disk (for sake of argument, say each disk is
>>> 1TB. You go from 3TB usable data
>>> to 2TB.  Most likely, you need to resize the file system in place so it
>>> fits into 2TB.  You're probably booted
>>> onto md0 also, which makes it difficult.  Resizing a hot filesystem
>>> without scratch space??  If your file system
>>> can't be dynamically reduced, then no point worrying about md raid.
>>>
>>> I don't see it happening .. ever.  Even if somebody wrote the logic, I
>>> can't imagine the code being tested enough
>>> to be safe for live data.
>>>
>>>       
>> I'd agree that, as described here, it's not too likely.
>>
>> However, if you start with the requirement that the capacity of the
>> final array is the same or larger than the capacity of the current array
>> (e.g. replace the drives, one at a time, with larger drives first) so
>> that no filesystem resizing is required, you should be able to do the
>> reshape without having to go degraded at all.  I'm not sure this process
>> would be fundamentally more complex (or more risky) than the current
>> growing process.
>>
>> Having said that, I'm not aware of any current work going on on this.
>> Things like RAID1 -> RAID5 and RAID5 -> RAID6 reshaping seem to be more
>> in demand than shrinking as well.
>>
>> Cheers,
>>    Robin
>> --
>>     ___
>>    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
>>   / / )      | Little Jim says ....                            |
>>  // !!       |      "He fallen in de water !!"                 |
>>
>>     
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>   


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-09 21:51     ` Robin Hill
@ 2008-12-09 23:15       ` Ryan Wagoner
  2008-12-10 12:14         ` Alex Lilley
  0 siblings, 1 reply; 19+ messages in thread
From: Ryan Wagoner @ 2008-12-09 23:15 UTC (permalink / raw)
  To: linux-raid

> Things like RAID1 -> RAID5 and RAID5 -> RAID6 reshaping seem to be more
> in demand than shrinking as well.

RAID 1 to RAID 5 can already be done with mdadm. The RAID 5 shrink
could be useful in some situations. The risk of user error causing
file system data loss is no worse than resizing an LVM volume without
shrinking the file system first.

Ryan

On Tue, Dec 9, 2008 at 4:51 PM, Robin Hill <robin@robinhill.me.uk> wrote:
> On Tue Dec 09, 2008 at 03:33:17PM -0600, David Lethe wrote:
>
>> > -----Original Message-----
>> > From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> > owner@vger.kernel.org] On Behalf Of Alex Lilley
>> > Sent: Tuesday, December 09, 2008 3:12 PM
>> > To: Michael Brancato
>> > Cc: linux-raid@vger.kernel.org
>> > Subject: Re: status of raid 4/5 disk reduce
>> >
>> > Hi Michael
>> >
>> > I posed this a few weeks back but haven't seen any activity on it yet
>> > or
>> > any suggestion as to when this might be possible.
>> >
>> > For reference, my thread started here:
>> > http://marc.info/?l=linux-raid&m=122753511309332&w=2
>> >
>> > Cross fingers for this because I think it is a real killer feature.
>> >
>> > Regards
>> >
>> > Alex
>> >
>> > Michael Brancato wrote:
>> > > I'm curious as to the status of the ability to reduce the number of
>> > > disks in a RAID 4/5 array.  I would like the ability to reshape a 4
>> > > disk raid4/5 to a 3 disk raid4/5 for flexibility.
>> > >
>> > > here is what I want to do....
>> > > $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
>> > > mdadm: set /dev/disk4 faulty in /dev/md0
>> > > mdadm: hot removed /dev/disk4
>> > > $ sudo mdadm --grow /dev/md0 -n3
>> > > mdadm: /dev/md0: Cannot reduce number of data disks (yet).
>> > >
>> > > I know this capability is missing in the md driver.  What is needed to
>> > > make it work and is anyone currently working on it?
>> > >
>> > > Regards,
>> > >
>>
>> This is a lot to ask for in terms of development, and creates extreme
>> risk of data loss.
>> First, you degrade /dev/md0, so any bad blocks or drive failures will
>> cause catastrophic
>> data loss, unless /dev/disk4 is used for mirroring in the interim.
>>
>> Secondly, by removing that disk (for sake of argument, say each disk is
>> 1TB. You go from 3TB usable data
>> to 2TB.  Most likely, you need to resize the file system in place so it
>> fits into 2TB.  You're probably booted
>> onto md0 also, which makes it difficult.  Resizing a hot filesystem
>> without scratch space??  If your file system
>> can't be dynamically reduced, then no point worrying about md raid.
>>
>> I don't see it happening .. ever.  Even if somebody wrote the logic, I
>> can't imagine the code being tested enough
>> to be safe for live data.
>>
> I'd agree that, as described here, it's not too likely.
>
> However, if you start with the requirement that the capacity of the
> final array is the same or larger than the capacity of the current array
> (e.g. replace the drives, one at a time, with larger drives first) so
> that no filesystem resizing is required, you should be able to do the
> reshape without having to go degraded at all.  I'm not sure this process
> would be fundamentally more complex (or more risky) than the current
> growing process.
>
> Having said that, I'm not aware of any current work going on on this.
> Things like RAID1 -> RAID5 and RAID5 -> RAID6 reshaping seem to be more
> in demand than shrinking as well.
>
> Cheers,
>    Robin
> --
>     ___
>    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
>   / / )      | Little Jim says ....                            |
>  // !!       |      "He fallen in de water !!"                 |
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-09 21:33   ` David Lethe
@ 2008-12-09 21:51     ` Robin Hill
  2008-12-09 23:15       ` Ryan Wagoner
  0 siblings, 1 reply; 19+ messages in thread
From: Robin Hill @ 2008-12-09 21:51 UTC (permalink / raw)
  To: linux-raid


On Tue Dec 09, 2008 at 03:33:17PM -0600, David Lethe wrote:

> > -----Original Message-----
> > From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> > owner@vger.kernel.org] On Behalf Of Alex Lilley
> > Sent: Tuesday, December 09, 2008 3:12 PM
> > To: Michael Brancato
> > Cc: linux-raid@vger.kernel.org
> > Subject: Re: status of raid 4/5 disk reduce
> > 
> > Hi Michael
> > 
> > I posed this a few weeks back but haven't seen any activity on it yet
> > or
> > any suggestion as to when this might be possible.
> > 
> > For reference, my thread started here:
> > http://marc.info/?l=linux-raid&m=122753511309332&w=2
> > 
> > Cross fingers for this because I think it is a real killer feature.
> > 
> > Regards
> > 
> > Alex
> > 
> > Michael Brancato wrote:
> > > I'm curious as to the status of the ability to reduce the number of
> > > disks in a RAID 4/5 array.  I would like the ability to reshape a 4
> > > disk raid4/5 to a 3 disk raid4/5 for flexibility.
> > >
> > > here is what I want to do....
> > > $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
> > > mdadm: set /dev/disk4 faulty in /dev/md0
> > > mdadm: hot removed /dev/disk4
> > > $ sudo mdadm --grow /dev/md0 -n3
> > > mdadm: /dev/md0: Cannot reduce number of data disks (yet).
> > >
> > > I know this capability is missing in the md driver.  What is needed to
> > > make it work and is anyone currently working on it?
> > >
> > > Regards,
> > >
> 
> This is a lot to ask for in terms of development, and creates extreme
> risk of data loss.
> First, you degrade /dev/md0, so any bad blocks or drive failures will
> cause catastrophic
> data loss, unless /dev/disk4 is used for mirroring in the interim.
> 
> Secondly, by removing that disk (for sake of argument, say each disk is
> 1TB. You go from 3TB usable data
> to 2TB.  Most likely, you need to resize the file system in place so it
> fits into 2TB.  You're probably booted
> onto md0 also, which makes it difficult.  Resizing a hot filesystem
> without scratch space??  If your file system
> can't be dynamically reduced, then no point worrying about md raid. 
> 
> I don't see it happening .. ever.  Even if somebody wrote the logic, I
> can't imagine the code being tested enough
> to be safe for live data.  
> 
I'd agree that, as described here, it's not too likely.

However, if you start with the requirement that the capacity of the
final array is the same or larger than the capacity of the current array
(e.g. replace the drives, one at a time, with larger drives first) so
that no filesystem resizing is required, you should be able to do the
reshape without having to go degraded at all.  I'm not sure this process
would be fundamentally more complex (or more risky) than the current
growing process.

Having said that, I'm not aware of any current work going on on this.
Things like RAID1 -> RAID5 and RAID5 -> RAID6 reshaping seem to be more
in demand than shrinking as well.

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: status of raid 4/5 disk reduce
  2008-12-09 21:11 ` Alex Lilley
@ 2008-12-09 21:33   ` David Lethe
  2008-12-09 21:51     ` Robin Hill
  0 siblings, 1 reply; 19+ messages in thread
From: David Lethe @ 2008-12-09 21:33 UTC (permalink / raw)
  To: Alex Lilley, Michael Brancato; +Cc: linux-raid

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Alex Lilley
> Sent: Tuesday, December 09, 2008 3:12 PM
> To: Michael Brancato
> Cc: linux-raid@vger.kernel.org
> Subject: Re: status of raid 4/5 disk reduce
> 
> Hi Michael
> 
> I posed this a few weeks back but haven't seen any activity on it yet
> or
> any suggestion as to when this might be possible.
> 
> For reference, my thread started here:
> http://marc.info/?l=linux-raid&m=122753511309332&w=2
> 
> Cross fingers for this because I think it is a real killer feature.
> 
> Regards
> 
> Alex
> 
> Michael Brancato wrote:
> > I'm curious as to the status of the ability to reduce the number of
> > disks in a RAID 4/5 array.  I would like the ability to reshape a 4
> > disk raid4/5 to a 3 disk raid4/5 for flexibility.
> >
> > here is what I want to do....
> > $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
> > mdadm: set /dev/disk4 faulty in /dev/md0
> > mdadm: hot removed /dev/disk4
> > $ sudo mdadm --grow /dev/md0 -n3
> > mdadm: /dev/md0: Cannot reduce number of data disks (yet).
> >
> > I know this capability is missing in the md driver.  What is needed
> to
> > make it work and is anyone currently working on it?
> >
> > Regards,
> >
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid"
> in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

This is a lot to ask for in terms of development, and creates extreme
risk of data loss.
First, you degrade /dev/md0, so any bad blocks or drive failures will
cause catastrophic
data loss, unless /dev/disk4 is used for mirroring in the interim.

Secondly, by removing that disk (for the sake of argument, say each
disk is 1TB), you go from 3TB of usable data to 2TB.  Most likely, you
need to resize the file system in place so it fits into 2TB.  You're
probably booted onto md0 as well, which makes it difficult.  Resizing a
hot filesystem without scratch space??  If your file system can't be
dynamically reduced, then there is no point worrying about md raid. 

I don't see it happening .. ever.  Even if somebody wrote the logic, I
can't imagine the code being tested enough
to be safe for live data.  


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: status of raid 4/5 disk reduce
  2008-12-08 20:59 Michael Brancato
@ 2008-12-09 21:11 ` Alex Lilley
  2008-12-09 21:33   ` David Lethe
  2008-12-15 23:18 ` Neil Brown
  1 sibling, 1 reply; 19+ messages in thread
From: Alex Lilley @ 2008-12-09 21:11 UTC (permalink / raw)
  To: Michael Brancato; +Cc: linux-raid

Hi Michael

I posed this a few weeks back but haven't seen any activity on it yet or 
any suggestion as to when this might be possible.

For reference, my thread started here: 
http://marc.info/?l=linux-raid&m=122753511309332&w=2

Cross fingers for this because I think it is a real killer feature.

Regards

Alex

Michael Brancato wrote:
> I'm curious as to the status of the ability to reduce the number of 
> disks in a RAID 4/5 array.  I would like the ability to reshape a 4 
> disk raid4/5 to a 3 disk raid4/5 for flexibility.
>
> here is what I want to do....
> $ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
> mdadm: set /dev/disk4 faulty in /dev/md0
> mdadm: hot removed /dev/disk4
> $ sudo mdadm --grow /dev/md0 -n3
> mdadm: /dev/md0: Cannot reduce number of data disks (yet).
>
> I know this capability is missing in the md driver.  What is needed to 
> make it work and is anyone currently working on it?
>
> Regards,
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* status of raid 4/5 disk reduce
@ 2008-12-08 20:59 Michael Brancato
  2008-12-09 21:11 ` Alex Lilley
  2008-12-15 23:18 ` Neil Brown
  0 siblings, 2 replies; 19+ messages in thread
From: Michael Brancato @ 2008-12-08 20:59 UTC (permalink / raw)
  To: linux-raid

I'm curious as to the status of the ability to reduce the number of 
disks in a RAID 4/5 array.  I would like the ability to reshape a 4 disk 
raid4/5 to a 3 disk raid4/5 for flexibility.

here is what I want to do....
$ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
mdadm: set /dev/disk4 faulty in /dev/md0
mdadm: hot removed /dev/disk4
$ sudo mdadm --grow /dev/md0 -n3
mdadm: /dev/md0: Cannot reduce number of data disks (yet).

I know this capability is missing in the md driver.  What is needed to 
make it work and is anyone currently working on it?

Regards,

-- 
Mike Brancato, CISSP

^ permalink raw reply	[flat|nested] 19+ messages in thread
