* status of raid 4/5 disk reduce
@ 2008-12-08 20:59 Michael Brancato
  2008-12-09 21:11 ` Alex Lilley
  2008-12-15 23:18 ` Neil Brown
  0 siblings, 2 replies; 19+ messages in thread
From: Michael Brancato @ 2008-12-08 20:59 UTC (permalink / raw)
  To: linux-raid

I'm curious about the status of support for reducing the number of
disks in a RAID 4/5 array.  I would like to be able to reshape a 4-disk
RAID 4/5 into a 3-disk RAID 4/5 for flexibility.

Here is what I want to do:
$ sudo mdadm /dev/md0 --fail /dev/disk4 --remove /dev/disk4
mdadm: set /dev/disk4 faulty in /dev/md0
mdadm: hot removed /dev/disk4
$ sudo mdadm --grow /dev/md0 -n3
mdadm: /dev/md0: Cannot reduce number of data disks (yet).

I know this capability is missing in the md driver.  What is needed to 
make it work, and is anyone currently working on it?

Regards,

-- 
Mike Brancato, CISSP

* Re: status of raid 4/5 disk reduce
@ 2008-12-12 15:38 David Lethe
  0 siblings, 0 replies; 19+ messages in thread
From: David Lethe @ 2008-12-12 15:38 UTC (permalink / raw)
  To: Michael Brancato, Alex Lilley; +Cc: linux-raid



-----Original Message-----

From:  "Michael Brancato" <mike@mikebrancato.com>
Subj:  Re: status of raid 4/5 disk reduce
Date:  Wed Dec 10, 2008 6:07 pm
Size:  3K
To:  "Alex Lilley" <alex@redwax.co.uk>
cc:  "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>

 
> There is the very obvious use case of reducing the number of drives but 
> ultimately having a larger array if the new drives are all larger.  And 
> there should be no issue with filesystem/LVM resizing, as these can 
> generally grow online anyway. 
>  
> I appreciate that shrinking the size of the array, and doing so onto 
> fewer disks, is both an unlikely requirement and fraught with danger.   
> Growing the size of the array but onto fewer disks is very useful 
> indeed, which is what I was getting at. 
 
Hardware limitations are a good use case.  When I say reduce, I mean  
--grow -nX, not necessarily reducing the size of the array in the end. 
 
>>>> This is a lot to ask for in terms of development, and creates extreme 
>>>> risk of data loss. 
>>>> First, you degrade /dev/md0, so any bad blocks or drive failures will 
>>>> cause catastrophic 
>>>> data loss, unless /dev/disk4 is used for mirroring in the interim. 
 
This is a standard fact of RAID 4/5.  Any RAID 4/5 array with a failed 
drive is subject to the same concerns.  Isn't this already true today 
with --grow when upgrading a 4x100GB array to 4x200GB by replacing one 
drive at a time? 
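 
For reference, that one-drive-at-a-time capacity upgrade is possible 
today; a rough sketch (device names are made up, ext3 is assumed, and 
each resync must finish before the next swap): 
 
$ sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1 
# physically swap the 100GB disk for a 200GB one, partition it, then: 
$ sudo mdadm /dev/md0 --add /dev/sdb1 
$ cat /proc/mdstat      # wait for the resync, repeat for the other members 
# once every member has been replaced: 
$ sudo mdadm --grow /dev/md0 --size=max    # use the new per-device capacity 
$ sudo resize2fs /dev/md0                  # grow the filesystem into the new space 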
 
>>>> Secondly, by removing that disk (for sake of argument, say each disk is 
>>>> 1TB. You go from 3TB usable data 
>>>> to 2TB.  Most likely, you need to resize the file system in place so it 
>>>> fits into 2TB.  You're probably booted 
>>>> onto md0 also, which makes it difficult.  Resizing a hot filesystem 
>>>> without scratch space??  If your file system 
>>>> can't be dynamically reduced, then no point worrying about md raid. 
 
There are a lot of assumptions here about how the array is used,  
filesystem support, etc.  I'm not saying that this is ideal in every  
situation.  There are many situations where md0 is not the boot device,  
md0 is not the device to be contracted, and the filesystem supports  
either online or offline resizing.  Filesystem expansion or contraction  
(online or not) and array shrinking are separate concerns, and shrinking  
the size of the array is already possible. 
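 
As a concrete illustration of that last point, the space used from each 
member can already be reduced with --grow --size; a rough sketch with 
made-up sizes, assuming an ext3 filesystem that can be taken offline and 
an mdadm/kernel that allows the component size to shrink (--size is in 
KiB per device): 
 
$ sudo umount /dev/md0 
$ sudo e2fsck -f /dev/md0 
$ sudo resize2fs /dev/md0 250G            # shrink the fs below the new array size first 
$ sudo mdadm --grow /dev/md0 --size=104857600   # use only ~100GiB of each member 
$ sudo mount /dev/md0 /mnt/data 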
 
Neil Brown has previously responded to a comment on this topic at  
http://neil.brown.name/blog/20050727143147 with regard to a --shrink option. 
 
Here are a few use cases: 
 
Hardware limitations - Replacing 4x120GB drives with 3x500GB drives.  
This would involve replacing three of the 120GB disks with 500GB ones,  
one at a time, rebuilding after each, before reshaping the array to 3  
drives and growing it to use all the space on the new drives.  This is  
especially useful on a system which cannot increase the number of drives  
it has (4 max), only their capacity. 
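 
To sketch that sequence out (device names are hypothetical, ext3 is 
assumed, and the -n3 reshape is exactly the part md cannot do yet): 
 
# replace three of the 120GB disks with 500GB ones, one at a time: 
$ sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1 
$ sudo mdadm /dev/md0 --add /dev/sdb1      # the new 500GB disk, freshly partitioned 
# wait for the resync in /proc/mdstat, then repeat for the next two disks 
# drop the last 120GB disk and reshape down to 3 members (the missing feature); 
# the filesystem must already fit in 2 x 120GB of data space at this point: 
$ sudo mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1 
$ sudo mdadm --grow /dev/md0 -n3 
# finally use the full 500GB on each member and grow the filesystem: 
$ sudo mdadm --grow /dev/md0 --size=max 
$ sudo resize2fs /dev/md0 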
 
Drive failure - A developer, home user or SMB has a drive failure in an  
array.  Due to money, time, shipping delays, etc., the user cannot  
replace the drive immediately and the array sits in a degraded state.  
The user shrinks the filesystem by one drive's worth of space and  
shrinks the array, returning it to an optimal state.  The array would be  
back in a protected state in hours rather than the days spent waiting on  
a replacement drive. 
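 
Roughly, for a 4x1TB RAID5 that has lost a member (sizes and the ext3 
filesystem are just for illustration; the -n3 reshape is the missing 
piece): 
 
$ sudo umount /dev/md0 
$ sudo e2fsck -f /dev/md0 
$ sudo resize2fs /dev/md0 1800G     # shrink the fs to fit two data disks' worth of space 
$ sudo mdadm --grow /dev/md0 -n3    # reshape 4 -> 3 devices; the array is optimal again 
$ sudo mount /dev/md0 /mnt/data 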
 
Flexibility - A user wishes to free a disk from an oversized array in  
order to use that disk elsewhere. 
 
I hope this gives a better understanding of the usefulness of reducing  
the number of disks in a RAID 4/5 array. 
-- 
Mike Brancato, CISSP 
 
Statistically speaking, if you are in degraded mode, the worst thing to do would be a resize.  It would take 3x-14x longer than a rebuild, as every block of every one of the n drives will have to be read, and you will have multiple writes to all n-1 disks.  Do the math.
The nature of the I/O means you won't get much help from the cache either.
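
To spell out the arithmetic with made-up numbers (4x1TB RAID5, one
member already failed, n = 4):

# rebuild:  read ~1TB from each of the 3 survivors, write ~1TB to the
#           replacement; largely sequential, and the writes land on a
#           disk that is not being read from
# reshape:  read ~1TB from every surviving member AND rewrite most of
#           the data and parity back onto those same 3 disks, so each
#           survivor seeks between its read and write positions for the
#           whole operation; that interleaving, more than the raw byte
#           count, is what blows the time out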

If you are degraded, the last thing you want to do is pound the surviving drives this way.  An experienced admin would spend the time doing an incremental backup, or at least power down the machine if they didn't have a spare disk.  Granted, there are some valid scenarios for resizing, but doing so with a degraded md array is just not a smart idea.


Thread overview: 19+ messages
2008-12-08 20:59 status of raid 4/5 disk reduce Michael Brancato
2008-12-09 21:11 ` Alex Lilley
2008-12-09 21:33   ` David Lethe
2008-12-09 21:51     ` Robin Hill
2008-12-09 23:15       ` Ryan Wagoner
2008-12-10 12:14         ` Alex Lilley
2008-12-11  0:07           ` Michael Brancato
2008-12-11  4:30             ` David Lethe
2008-12-11  6:33               ` Michael Brancato
2008-12-11 13:52                 ` Louis-David Mitterrand
2008-12-11 15:13                   ` Michael Brancato
2008-12-11 11:43               ` John Robinson
2008-12-11 14:46                 ` Mikael Abrahamsson
2008-12-11 15:24                   ` David Lethe
2008-12-11 16:13                     ` Michael Brancato
2008-12-11 15:27                   ` Michael Brancato
2008-12-11 11:51               ` Alex Lilley
2008-12-15 23:18 ` Neil Brown
2008-12-12 15:38 David Lethe
