From: Michael Brancato <mike@mikebrancato.com>
To: David Lethe <david@santools.com>
Cc: Mikael Abrahamsson <swmike@swm.pp.se>,
	John Robinson <john.robinson@anonymous.org.uk>,
	Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: status of raid 4/5 disk reduce
Date: Thu, 11 Dec 2008 11:13:08 -0500
Message-ID: <10B57D9F-CC19-41E6-8941-0153CA6CC42D@mikebrancato.com>
In-Reply-To: <A20315AE59B5C34585629E258D76A97C02FA211E@34093-C3-EVS3.exchange.rackspace.com>

On Dec 11, 2008, at 10:24 AM, "David Lethe" <david@santools.com> wrote:

>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> owner@vger.kernel.org] On Behalf Of Mikael Abrahamsson
>> Sent: Thursday, December 11, 2008 8:47 AM
>> To: John Robinson
>> Cc: Linux RAID
>> Subject: Re: status of raid 4/5 disk reduce
>>
>> On Thu, 11 Dec 2008, John Robinson wrote:
>>
>>> But my 1U server only physically has room for 4 drives. It doesn't
>>> matter how many extra controllers I buy, I can't attach more drives.
>>
>> You can use a temporary external eSATA enclosure, pvmove the data if
>> you're using LVM, then take downtime when you exchange your new drives
>> into the internal drive bays (or you could even move them one by one,
>> hot-removing a drive at a time from the eSATA enclosure into the
>> internal drive bays).
>>
>> --
>> Mikael Abrahamsson    email: swmike@swm.pp.se
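
A minimal sketch of the LVM migration described above, for reference (the
volume group vg0, the old internal array /dev/md0, and the temporary eSATA
array /dev/md1 are illustrative assumptions, not names from this thread):

  # Prepare the temporary array in the eSATA enclosure as a new PV
  pvcreate /dev/md1
  # Add it to the existing volume group
  vgextend vg0 /dev/md1
  # Evacuate every allocated extent off the old internal array (runs online)
  pvmove /dev/md0 /dev/md1
  # Drop the old array from the volume group and clear its LVM label
  vgreduce vg0 /dev/md0
  pvremove /dev/md0

pvmove works with the filesystems mounted, so the only downtime is the
physical swap of the drives into the internal bays.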
>
>
> Or you can crack open another PC and get power from that via long power
> cables. Since you have a rack-mounted system, chances are good you have
> more rack-mounted systems.  Of course this is still all moot, because
> John insists that the RAID reduce function doesn't need to address
> reducing mounted file systems... and your 1U system is booted to the
> very file system that needs to be reduced.
>
> So we have come full circle.  An md reduce isn't practical unless it
> includes on-line file system support, which is a deal-killer as it not
> only requires massive development effort outside of the Linux RAID
> group, but also has to be done in conjunction with the developers in
> this group.
>
> Just suck it up and mount all the disks somehow (or use backup).  Either
> give up on resizing, or install Solaris with ZFS boot; then you can
> resize all you want.
>
>
> David

David,
I must correct your statements. You are the only person here who insists
on online FS shrinking.  You are the only person with the misconception
that the ability for a FS to shrink and the ability to reshape an array
are interdependent. You also cannot shrink ZFS via vdev removal /
replacement.  I think you meant VxFS, which can evacuate data from disks
to shrink.

What about a 2U, 8-drive server with a RAID1 boot array and a 6-drive
RAID5?  I'm sure that no matter the situation, there is a labor-intensive,
risky, power-cord-snaking alternative that involves additional hardware.
But providing a way to reshape via unmount, resize, remount, reshape is a
practical alternative, whether or not it is the only possible way to
achieve the same results.  In-place expansion and shrinking offer numerous
administrative benefits.
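
A minimal sketch of that workflow, assuming an mdadm and kernel that can
actually reduce --raid-devices (the very feature this thread asks for),
with a 6-disk RAID5 at /dev/md1 holding an ext3 filesystem and being
reshaped down to 5 members; every device name, mount point, and size below
is an illustrative assumption:

  umount /data
  # The filesystem must be clean before it can be shrunk
  e2fsck -f /dev/md1
  # Shrink the FS safely below the capacity the array will have afterwards
  resize2fs /dev/md1 1900G
  # Clamp the usable array size ahead of removing a member
  mdadm --grow /dev/md1 --array-size=2000G
  # Reshape onto fewer members; the backup file protects the stripes
  # being rearranged
  mdadm --grow /dev/md1 --raid-devices=5 --backup-file=/root/md1-reshape.bak
  # Grow the FS back out to fill the reduced array and return to service
  resize2fs /dev/md1
  mount /dev/md1 /data

Only the filesystem shrink strictly needs the array offline; the md
reshape itself could proceed with the array back in service, which is
exactly why FS shrinking and array reshaping are not interdependent.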

You can continue your Rube Goldberg array shrinks to your heart's
content. But it comes off as rude to insist that yours is the only
solution to array shrinking while ruling out an in-place reshape reduce.

Thread overview: 19+ messages
2008-12-08 20:59 status of raid 4/5 disk reduce Michael Brancato
2008-12-09 21:11 ` Alex Lilley
2008-12-09 21:33   ` David Lethe
2008-12-09 21:51     ` Robin Hill
2008-12-09 23:15       ` Ryan Wagoner
2008-12-10 12:14         ` Alex Lilley
2008-12-11  0:07           ` Michael Brancato
2008-12-11  4:30             ` David Lethe
2008-12-11  6:33               ` Michael Brancato
2008-12-11 13:52                 ` Louis-David Mitterrand
2008-12-11 15:13                   ` Michael Brancato
2008-12-11 11:43               ` John Robinson
2008-12-11 14:46                 ` Mikael Abrahamsson
2008-12-11 15:24                   ` David Lethe
2008-12-11 16:13                     ` Michael Brancato [this message]
2008-12-11 15:27                   ` Michael Brancato
2008-12-11 11:51               ` Alex Lilley
2008-12-15 23:18 ` Neil Brown
2008-12-12 15:38 David Lethe
