* Adding more drives/saturating the bandwidth
@ 2009-03-26 12:43 Jon Hardcastle
  2009-03-30 15:40 ` Goswin von Brederlow
  0 siblings, 1 reply; 16+ messages in thread
From: Jon Hardcastle @ 2009-03-26 12:43 UTC (permalink / raw)
  To: linux-raid


Hey guys, how do you know if your machine can handle adding some more drives to it? How can you check that there is enough bus I/O to handle extra SATA cards, and also that the machine is powerful enough to support, say, an 8-drive RAID 5?

-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------


* Re: Adding more drives/saturating the bandwidth
  2009-03-26 12:43 Adding more drives/saturating the bandwidth Jon Hardcastle
@ 2009-03-30 15:40 ` Goswin von Brederlow
  2009-03-30 16:28   ` Nagilum
  2009-03-31  8:23   ` Jon Hardcastle
  0 siblings, 2 replies; 16+ messages in thread
From: Goswin von Brederlow @ 2009-03-30 15:40 UTC (permalink / raw)
  To: Jon; +Cc: linux-raid

Jon Hardcastle <jd_hardcastle@yahoo.com> writes:

> Hey guys, How do you know if your machine can handle adding some more drives to it? How can you check that there is enough BUS IO to handle extra sata cards and also that the machine is powerful enough to support say an 8 drive raid 5...

A) Trial & error.
B) Look up the speed of the bus and halve it. Any bandwidth left?
   Make sure the CPU isn't at 100% already as well.
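
To put rough numbers on the halving rule (illustrative figures, not from
the original post):

  PCI, 32-bit/33 MHz : ~133 MB/s peak -> budget ~65 MB/s; eight drives at
                       ~70 MB/s sequential each would swamp it many times over.
  PCIe 1.x, x4 link  : ~1 GB/s per direction -> budget ~500 MB/s; comfortable
                       headroom for eight such drives.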

MfG
        Goswin


* Re: Adding more drives/saturating the bandwidth
  2009-03-30 15:40 ` Goswin von Brederlow
@ 2009-03-30 16:28   ` Nagilum
  2009-03-31  8:23   ` Jon Hardcastle
  1 sibling, 0 replies; 16+ messages in thread
From: Nagilum @ 2009-03-30 16:28 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Jon, linux-raid


----- Message from goswin-v-b@web.de ---------
> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
>
>> Hey guys, How do you know if your machine can handle adding some  
>> more drives to it? How can you check that there is enough BUS IO to  
>> handle extra sata cards and also that the machine is powerful  
>> enough to support say an 8 drive raid 5...

Maybe hdparm could be abused for that?
Reporting the disk cache transfer rates for one or two disks at the same time?

----- End message from goswin-v-b@web.de -----
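
One way to try that (a sketch only; device names are examples, and hdparm's
-t flag measures buffered reads from the device rather than strictly the
drive's cache):

hdparm -t /dev/sda                           # one disk on its own, as a baseline
hdparm -t /dev/sda & hdparm -t /dev/sdb &    # two disks at once
wait                                         # if the per-disk figures drop, the bus/controller is the limit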



========================================================================
#    _  __          _ __     http://www.nagilum.org/ \n icq://69646724 #
#   / |/ /__ ____ _(_) /_ ____ _  nagilum@nagilum.org \n +491776461165 #
#  /    / _ `/ _ `/ / / // /  ' \  Amiga (68k/PPC): AOS/NetBSD/Linux   #
# /_/|_/\_,_/\_, /_/_/\_,_/_/_/_/   Mac (PPC): MacOS-X / NetBSD /Linux #
#           /___/     x86: FreeBSD/Linux/Solaris/Win2k  ARM9: EPOC EV6 #
========================================================================


----------------------------------------------------------------
cakebox.homeunix.net - all the machine one needs..


* Re: Adding more drives/saturating the bandwidth
  2009-03-30 15:40 ` Goswin von Brederlow
  2009-03-30 16:28   ` Nagilum
@ 2009-03-31  8:23   ` Jon Hardcastle
  2009-03-31 13:05     ` Greg Freemyer
  2009-03-31 21:07     ` Goswin von Brederlow
  1 sibling, 2 replies; 16+ messages in thread
From: Jon Hardcastle @ 2009-03-31  8:23 UTC (permalink / raw)
  To: Jon, Goswin von Brederlow; +Cc: linux-raid


--- On Mon, 30/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> From: Goswin von Brederlow <goswin-v-b@web.de>
> Subject: Re: Adding more drives/saturating the bandwidth
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Monday, 30 March, 2009, 4:40 PM
> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
> 
> > Hey guys, How do you know if your machine can handle
> adding some more drives to it? How can you check that there
> is enough BUS IO to handle extra sata cards and also that
> the machine is powerful enough to support say an 8 drive
> raid 5...
> 
> A) Try & error.
> B) look up the speed of the bus and half it. Any bandwidth
> left?
>    make sure the cpu isn't at 100% already as well
> 
> MfG
>         Goswin

Cheers guys, I don't think CPU will be an issue: when I looked yesterday whilst copying to my 6-drive RAID 5 array it was at ~10% (the only time I get access issues is when I am SMART-checking all 6 discs and trying to stream a movie off it at the same time!)

As for trial and error... sounds scary, as once I have added a drive to the array I can't undo the process! I suppose I could add them as JBODs and then thrash the hell out of them whilst accessing the rest of the array... What does halving the bus speed tell me in any case?


-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------


* Re: Adding more drives/saturating the bandwidth
  2009-03-31  8:23   ` Jon Hardcastle
@ 2009-03-31 13:05     ` Greg Freemyer
  2009-03-31 21:07     ` Goswin von Brederlow
  1 sibling, 0 replies; 16+ messages in thread
From: Greg Freemyer @ 2009-03-31 13:05 UTC (permalink / raw)
  To: Jon; +Cc: Goswin von Brederlow, linux-raid

On Tue, Mar 31, 2009 at 4:23 AM, Jon Hardcastle <jd_hardcastle@yahoo.com> wrote:
>
> --- On Mon, 30/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>
>> From: Goswin von Brederlow <goswin-v-b@web.de>
>> Subject: Re: Adding more drives/saturating the bandwidth
>> To: Jon@eHardcastle.com
>> Cc: linux-raid@vger.kernel.org
>> Date: Monday, 30 March, 2009, 4:40 PM
>> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
>>
>> > Hey guys, How do you know if your machine can handle
>> adding some more drives to it? How can you check that there
>> is enough BUS IO to handle extra sata cards and also that
>> the machine is powerful enough to support say an 8 drive
>> raid 5...
>>
>> A) Try & error.
>> B) look up the speed of the bus and half it. Any bandwidth
>> left?
>>    make sure the cpu isn't at 100% already as well
>>
>> MfG
>>         Goswin
>
> Cheers guys, I dont think CPU will be an issue as when i looked yesterday whilst copying to my 6 drive raid 5 array it was at ~10% (the only time i get access issues is when i am smart checking all 6 discs and trying to stream a movie of it at the same time!)
>
> As for try and error... sounds scary as once I have added a drive to the array I can undo the process!

What do you think the "ran out of bandwidth" error is?  You make it
sound fatal.  Not true; it is just a bottleneck.  So if you design
your system to max out the bus structure with a random I/O load, but
then you perform a large sequential load, the sequential workload will
just underperform what you would expect based on the disk drives
themselves.

No big deal as long as you design and test based on your real world workload.

Greg
-- 
Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com


* Re: Adding more drives/saturating the bandwidth
  2009-03-31  8:23   ` Jon Hardcastle
  2009-03-31 13:05     ` Greg Freemyer
@ 2009-03-31 21:07     ` Goswin von Brederlow
  2009-04-01  8:15       ` Jon Hardcastle
                         ` (2 more replies)
  1 sibling, 3 replies; 16+ messages in thread
From: Goswin von Brederlow @ 2009-03-31 21:07 UTC (permalink / raw)
  To: Jon; +Cc: Goswin von Brederlow, linux-raid

Jon Hardcastle <jd_hardcastle@yahoo.com> writes:

> --- On Mon, 30/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>
>> From: Goswin von Brederlow <goswin-v-b@web.de>
>> Subject: Re: Adding more drives/saturating the bandwidth
>> To: Jon@eHardcastle.com
>> Cc: linux-raid@vger.kernel.org
>> Date: Monday, 30 March, 2009, 4:40 PM
>> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
>> 
>> > Hey guys, How do you know if your machine can handle
>> adding some more drives to it? How can you check that there
>> is enough BUS IO to handle extra sata cards and also that
>> the machine is powerful enough to support say an 8 drive
>> raid 5...
>> 
>> A) Try & error.
>> B) look up the speed of the bus and half it. Any bandwidth
>> left?
>>    make sure the cpu isn't at 100% already as well
>> 
>> MfG
>>         Goswin
>
> Cheers guys, I dont think CPU will be an issue as when i looked yesterday whilst copying to my 6 drive raid 5 array it was at ~10% (the only time i get access issues is when i am smart checking all 6 discs and trying to stream a movie of it at the same time!)
>
> As for try and error... sounds scary as once I have added a drive to the array I can undo the process! I suppose I could add them as JBOD's and then thrash the hell out of them.. whilst accessing the rest of the array... What does halving the bus speed tell me in anycase?

Just see if you can get a decent bandwidth from each disk:

for i in /dev/sd?; do dd if=$i of=/dev/null bs=1M count=10240 & done   # read 10 GiB from each disk in parallel
iostat -k 10                                                           # watch per-disk and aggregate throughput (KB/s, 10 s intervals)


Halving the bus speed gives you a reasonable lower bound on how
much data you should be able to pull off the disks. If your disks can't
even fill half the bus bandwidth, you can certainly cope with more
disks, or something is seriously wrong.
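
As a rough illustration of what to look for (example numbers, not
measurements from this thread): if six dd readers each show ~80 MB/s in
iostat, the aggregate is ~480 MB/s and the disks themselves are the limit;
if instead every disk collapses to ~40 MB/s behind a single PCIe 1.x x1
link (~250 MB/s), the bus is saturated and adding drives won't add
throughput.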

MfG
        Goswin


* Re: Adding more drives/saturating the bandwidth
  2009-03-31 21:07     ` Goswin von Brederlow
@ 2009-04-01  8:15       ` Jon Hardcastle
  2009-04-01  8:56       ` Jon Hardcastle
  2009-04-01 14:56       ` Andrew Burgess
  2 siblings, 0 replies; 16+ messages in thread
From: Jon Hardcastle @ 2009-04-01  8:15 UTC (permalink / raw)
  To: Jon, Goswin von Brederlow; +Cc: linux-raid


--- On Tue, 31/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> From: Goswin von Brederlow <goswin-v-b@web.de>
> Subject: Re: Adding more drives/saturating the bandwidth
> To: Jon@eHardcastle.com
> Cc: "Goswin von Brederlow" <goswin-v-b@web.de>, linux-raid@vger.kernel.org
> Date: Tuesday, 31 March, 2009, 10:07 PM
> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
> 
> > --- On Mon, 30/3/09, Goswin von Brederlow
> <goswin-v-b@web.de> wrote:
> >
> >> From: Goswin von Brederlow
> <goswin-v-b@web.de>
> >> Subject: Re: Adding more drives/saturating the
> bandwidth
> >> To: Jon@eHardcastle.com
> >> Cc: linux-raid@vger.kernel.org
> >> Date: Monday, 30 March, 2009, 4:40 PM
> >> Jon Hardcastle <jd_hardcastle@yahoo.com>
> writes:
> >> 
> >> > Hey guys, How do you know if your machine can
> handle
> >> adding some more drives to it? How can you check
> that there
> >> is enough BUS IO to handle extra sata cards and
> also that
> >> the machine is powerful enough to support say an 8
> drive
> >> raid 5...
> >> 
> >> A) Try & error.
> >> B) look up the speed of the bus and half it. Any
> bandwidth
> >> left?
> >>    make sure the cpu isn't at 100% already as
> well
> >> 
> >> MfG
> >>         Goswin
> >
> > Cheers guys, I dont think CPU will be an issue as when
> i looked yesterday whilst copying to my 6 drive raid 5 array
> it was at ~10% (the only time i get access issues is when i
> am smart checking all 6 discs and trying to stream a movie
> of it at the same time!)
> >
> > As for try and error... sounds scary as once I have
> added a drive to the array I can undo the process! I suppose
> I could add them as JBOD's and then thrash the hell out
> of them.. whilst accessing the rest of the array... What
> does halving the bus speed tell me in anycase?
> 
> Just see if you can get a decent bandwidth from each disk:
> 
> for i in /dev/sd?; do dd if=$i of=/dev/null bs=1M
> count=10240 & done
> iostat -k 10
> 
> 
> Halving the bus speed gives you a reasonable low
> expectation of how
> much data you should be able to pull of the disks. If your
> disks can't
> even fill half the bus bandwidth you certainly can cope
> with more
> disks or something is seriously wrong.
> 
> MfG
>         Goswin
> --

Thank you, can I assume that is a destructive test? :-D


-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------


* Re: Adding more drives/saturating the bandwidth
  2009-03-31 21:07     ` Goswin von Brederlow
  2009-04-01  8:15       ` Jon Hardcastle
@ 2009-04-01  8:56       ` Jon Hardcastle
  2009-04-01 15:59         ` Goswin von Brederlow
  2009-04-01 14:56       ` Andrew Burgess
  2 siblings, 1 reply; 16+ messages in thread
From: Jon Hardcastle @ 2009-04-01  8:56 UTC (permalink / raw)
  To: Jon, Goswin von Brederlow; +Cc: linux-raid



--- On Tue, 31/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> From: Goswin von Brederlow <goswin-v-b@web.de>
> Subject: Re: Adding more drives/saturating the bandwidth
> To: Jon@eHardcastle.com
> Cc: "Goswin von Brederlow" <goswin-v-b@web.de>, linux-raid@vger.kernel.org
> Date: Tuesday, 31 March, 2009, 10:07 PM
> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
> 
> > --- On Mon, 30/3/09, Goswin von Brederlow
> <goswin-v-b@web.de> wrote:
> >
> >> From: Goswin von Brederlow
> <goswin-v-b@web.de>
> >> Subject: Re: Adding more drives/saturating the
> bandwidth
> >> To: Jon@eHardcastle.com
> >> Cc: linux-raid@vger.kernel.org
> >> Date: Monday, 30 March, 2009, 4:40 PM
> >> Jon Hardcastle <jd_hardcastle@yahoo.com>
> writes:
> >> 
> >> > Hey guys, How do you know if your machine can
> handle
> >> adding some more drives to it? How can you check
> that there
> >> is enough BUS IO to handle extra sata cards and
> also that
> >> the machine is powerful enough to support say an 8
> drive
> >> raid 5...
> >> 
> >> A) Try & error.
> >> B) look up the speed of the bus and half it. Any
> bandwidth
> >> left?
> >>    make sure the cpu isn't at 100% already as
> well
> >> 
> >> MfG
> >>         Goswin
> >
> > Cheers guys, I dont think CPU will be an issue as when
> i looked yesterday whilst copying to my 6 drive raid 5 array
> it was at ~10% (the only time i get access issues is when i
> am smart checking all 6 discs and trying to stream a movie
> of it at the same time!)
> >
> > As for try and error... sounds scary as once I have
> added a drive to the array I can undo the process! I suppose
> I could add them as JBOD's and then thrash the hell out
> of them.. whilst accessing the rest of the array... What
> does halving the bus speed tell me in anycase?
> 
> Just see if you can get a decent bandwidth from each disk:
> 
> for i in /dev/sd?; do dd if=$i of=/dev/null bs=1M
> count=10240 & done
> iostat -k 10
> 
> 
> Halving the bus speed gives you a reasonable low
> expectation of how
> much data you should be able to pull of the disks. If your
> disks can't
> even fill half the bus bandwidth you certainly can cope
> with more
> disks or something is seriously wrong.
> 
> MfG
>         Goswin
> --


Whoops! Of course it isn't; sorry. It is presumably a read test though, rather than a read/write? Safe to do whilst the array is assembled?

-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------


* Re: Adding more drives/saturating the bandwidth
  2009-03-31 21:07     ` Goswin von Brederlow
  2009-04-01  8:15       ` Jon Hardcastle
  2009-04-01  8:56       ` Jon Hardcastle
@ 2009-04-01 14:56       ` Andrew Burgess
  2009-04-01 15:17         ` David Lethe
  2009-04-01 18:06         ` Goswin von Brederlow
  2 siblings, 2 replies; 16+ messages in thread
From: Andrew Burgess @ 2009-04-01 14:56 UTC (permalink / raw)
  To: linux raid mailing list


> >> > Hey guys, How do you know if your machine can handle
> >> adding some more drives to it? How can you check that there
> >> is enough BUS IO to handle extra sata cards and also that
> >> the machine is powerful enough to support say an 8 drive
> >> raid 5...

Look at it from the viewpoint of RAID performance rather than disk
performance. How can the throughput be less with more disks?

So what if your bus bandwidth is saturated now? Then it will be
saturated with more disks too, but the RAID bandwidth should not change;
it's still the saturation bandwidth.

And if it's not saturated now, then the RAID throughput will increase,
assuming you can get more disks operating in parallel.

I'm sure there are corner cases we can nitpick, but isn't this correct in
general?



* RE: Adding more drives/saturating the bandwidth
  2009-04-01 14:56       ` Andrew Burgess
@ 2009-04-01 15:17         ` David Lethe
  2009-04-01 18:06         ` Goswin von Brederlow
  1 sibling, 0 replies; 16+ messages in thread
From: David Lethe @ 2009-04-01 15:17 UTC (permalink / raw)
  To: Andrew Burgess, linux raid mailing list

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Andrew Burgess
> Sent: Wednesday, April 01, 2009 9:57 AM
> To: linux raid mailing list
> Subject: Re: Adding more drives/saturating the bandwidth
> 
> 
> > >> > Hey guys, How do you know if your machine can handle
> > >> adding some more drives to it? How can you check that there
> > >> is enough BUS IO to handle extra sata cards and also that
> > >> the machine is powerful enough to support say an 8 drive
> > >> raid 5...
> 
> Look at it from the viewpoint of raid performance rather than disk
> performance. How can the throughput be less with more disks?
> 
> So what if your bus bandwidth is saturated now? Then it will be
> saturated with more disks too, but the raid bandwidth should not
> change,
> it's still the saturation bandwidth.
> 
> And if it's not saturated now then the raid throughput will increase
> assuming you can get more disks operating in parallel.
> 
> I'm sure there are corner cases we can nitpick but isn't this correct
> in
> general?
> 
In the grand scheme of things, isn't this question rather irrelevant?  The
answer is one of perception.  Would you ask the same question about
whether or not you could add more programs to run and saturate the CPU?
Of course not; the answer is easy.

If you need the extra disks, then you need more disks because the
applications running on that machine need more disk space than you have
available.  Your choice is always to free up existing space, add more
disks, or lighten the overhead by moving applications elsewhere...
assuming overall performance is "satisfactory" in the eyes of the people
who control your budget.

So isn't this all a moot point?  Set a baseline for appropriate
performance before you begin, create a plan in the event the updated
hardware config doesn't meet it, then react accordingly.

David




* Re: Adding more drives/saturating the bandwidth
  2009-04-01  8:56       ` Jon Hardcastle
@ 2009-04-01 15:59         ` Goswin von Brederlow
  2009-04-01 16:15           ` Greg Freemyer
  0 siblings, 1 reply; 16+ messages in thread
From: Goswin von Brederlow @ 2009-04-01 15:59 UTC (permalink / raw)
  To: Jon; +Cc: Goswin von Brederlow, linux-raid

Jon Hardcastle <jd_hardcastle@yahoo.com> writes:

> --- On Tue, 31/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>> Just see if you can get a decent bandwidth from each disk:
>> 
>> for i in /dev/sd?; do dd if=$i of=/dev/null bs=1M
>> count=10240 & done
>> iostat -k 10
>> 
>> 
>> Halving the bus speed gives you a reasonable low
>> expectation of how
>> much data you should be able to pull of the disks. If your
>> disks can't
>> even fill half the bus bandwidth you certainly can cope
>> with more
>> disks or something is seriously wrong.
>> 
>> MfG
>>         Goswin

Perfectly safe. Just don't do it the other way around:

!!!WARNING WRITE TEST!!!

for i in /dev/sd?; do dd if=/dev/zero of=$i bs=1M count=10240 & done   # overwrites the first 10 GiB of EVERY /dev/sd? device
iostat -k 10
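
A less destructive variant (just a sketch; it measures array/filesystem
write throughput rather than the raw disks, and /mnt/array is a placeholder
for wherever the md array is mounted):

for i in 1 2 3 4; do dd if=/dev/zero of=/mnt/array/ddtest$i bs=1M count=4096 oflag=direct & done
iostat -k 10
wait; rm -f /mnt/array/ddtest?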

MfG
        Goswin


* Re: Adding more drives/saturating the bandwidth
  2009-04-01 15:59         ` Goswin von Brederlow
@ 2009-04-01 16:15           ` Greg Freemyer
  0 siblings, 0 replies; 16+ messages in thread
From: Greg Freemyer @ 2009-04-01 16:15 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Jon, linux-raid

On Wed, Apr 1, 2009 at 11:59 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
>
>> --- On Tue, 31/3/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>>> Just see if you can get a decent bandwidth from each disk:
>>>
>>> for i in /dev/sd?; do dd if=$i of=/dev/null bs=1M
>>> count=10240 & done
>>> iostat -k 10
>>>
>>>
>>> Halving the bus speed gives you a reasonable low
>>> expectation of how
>>> much data you should be able to pull of the disks. If your
>>> disks can't
>>> even fill half the bus bandwidth you certainly can cope
>>> with more
>>> disks or something is seriously wrong.
>>>
>>> MfG
>>>         Goswin
>
> Perfectly save. Just don't do it the other way around:
>
> !!!WARNING WRITE TEST!!!
>
> for i in /dev/sd?; do dd if=/dev/zero of=$i bs=1M count=10240 & done
> iostat -k 10

Be advised: on some SUSE kernels /dev/zero is known to have a
performance bug that would keep it from saturating a PCI Express bus.
I think it was the kernel they had about a year ago (the 10.3 release).

Greg
-- 
Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com


* Re: Adding more drives/saturating the bandwidth
  2009-04-01 14:56       ` Andrew Burgess
  2009-04-01 15:17         ` David Lethe
@ 2009-04-01 18:06         ` Goswin von Brederlow
  2009-04-01 18:57           ` Richard Scobie
  1 sibling, 1 reply; 16+ messages in thread
From: Goswin von Brederlow @ 2009-04-01 18:06 UTC (permalink / raw)
  To: Andrew Burgess; +Cc: linux raid mailing list

Andrew Burgess <aab@cichlid.com> writes:

>> >> > Hey guys, How do you know if your machine can handle
>> >> adding some more drives to it? How can you check that there
>> >> is enough BUS IO to handle extra sata cards and also that
>> >> the machine is powerful enough to support say an 8 drive
>> >> raid 5...
>
> Look at it from the viewpoint of raid performance rather than disk
> performance. How can the throughput be less with more disks?
>
> So what if your bus bandwidth is saturated now? Then it will be
> saturated with more disks too, but the raid bandwidth should not change,
> it's still the saturation bandwidth.
>
> And if it's not saturated now then the raid throughput will increase
> assuming you can get more disks operating in parallel.
>
> I'm sure there are corner cases we can nitpick but isn't this correct in
> general?

To show a worst case, say you have a 5-disk raid5 with a 64k chunk
size. Further say your application writes chunks of 256k to disk at
random offsets (but aligned to 256k). Each write is a nice full
stripe, the parity is calculated and 320k are written.

Now think about the same with a 6-disk raid5. Suddenly you have partial
stripes. And the alignment on stripe boundaries is gone too. So now
you need to read 384k (I think) of data, compute or delta the parity
(whichever requires fewer reads), and write back 384k in 4 out of 6
cases, and read 64k and write back 320k otherwise. So on average you
read 277.33k and write 362.66k (= 640k combined). That is twice the
previous bandwidth, not to mention the delay for reading.

So by adding a drive your throughput is suddenly halved. Reading in
degraded mode suffers a slowdown too. CPU usage goes up too.


The performance of a raid is so much dependent on its access pattern
that IMHO one cannot talk about a general case. But note that the
more drives you have, the bigger a stripe becomes, and you need larger
sequential writes to avoid reads.
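
Roughly, the accounting behind that (a simplified illustration; the real
md driver has more tricks):

  5-disk raid5, 64k chunks: data stripe = 4 x 64k = 256k
    256k aligned write -> full stripe -> write 256k data + 64k parity, no reads
  6-disk raid5, 64k chunks: data stripe = 5 x 64k = 320k
    256k aligned write -> partial stripe -> md must first either read the
    untouched chunk(s) and recompute parity, or read the old data plus old
    parity and apply the delta, which costs extra reads and rotational
    latency before it can write.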

MfG
        Goswin


* Re: Adding more drives/saturating the bandwidth
  2009-04-01 18:06         ` Goswin von Brederlow
@ 2009-04-01 18:57           ` Richard Scobie
  2009-04-03 20:42             ` Goswin von Brederlow
  0 siblings, 1 reply; 16+ messages in thread
From: Richard Scobie @ 2009-04-01 18:57 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Andrew Burgess, linux raid mailing list

Goswin von Brederlow wrote:

> 
> Now think about the same with 6 disk raid5. Suddenly you have partial
> stripes. And the alignment on stripe boundaries is gone too. So now
> you need to read 384k (I think) of data, compute or delta (whichever
> requires less reads) the parity and write back 384k in 4 out of 6
> cases and read 64k and write back 320k otherwise. So on average you
> read 277.33k and write 362.66k (= 640k combined). That is twice the
> previous bandwidth not to mention the delay for reading.
> 
> So by adding a drive your throughput is suddenly halfed. Reading in
> degraded mode suffers a slowdown too. CPU goes up too.
> 
> 
> The performance of a raid is so much dependent on its access pattern
> that imho one can not talk about a general case. But note that the
> more drives you have the bigger a stripe becomes and you need larger
> sequential writes to avoid reads.

I take your point, but don't filesystems like XFS and ext4 play nice in
this scenario by combining multiple sub-stripe writes into stripe-sized
writes out to disk?

Regards,

Richard


* Re: Adding more drives/saturating the bandwidth
  2009-04-01 18:57           ` Richard Scobie
@ 2009-04-03 20:42             ` Goswin von Brederlow
  2009-04-03 21:06               ` Robin Hill
  0 siblings, 1 reply; 16+ messages in thread
From: Goswin von Brederlow @ 2009-04-03 20:42 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Andrew Burgess, linux raid mailing list

Richard Scobie <richard@sauce.co.nz> writes:

> Goswin von Brederlow wrote:
>
>>
>> Now think about the same with 6 disk raid5. Suddenly you have partial
>> stripes. And the alignment on stripe boundaries is gone too. So now
>> you need to read 384k (I think) of data, compute or delta (whichever
>> requires less reads) the parity and write back 384k in 4 out of 6
>> cases and read 64k and write back 320k otherwise. So on average you
>> read 277.33k and write 362.66k (= 640k combined). That is twice the
>> previous bandwidth not to mention the delay for reading.
>>
>> So by adding a drive your throughput is suddenly halfed. Reading in
>> degraded mode suffers a slowdown too. CPU goes up too.
>>
>>
>> The performance of a raid is so much dependent on its access pattern
>> that imho one can not talk about a general case. But note that the
>> more drives you have the bigger a stripe becomes and you need larger
>> sequential writes to avoid reads.
>
> I take your point, but don't filesystems like XFS and ext4 play nice
> in this scenario by combining multiple sub-stripe writes into stripe
> sized writes out to disk?
>
> Regards,
>
> Richard

Some filesystems have a parameter to tune to the stripe size. Whether that
actually helps or not I leave for you to test.

But ask yourself: do you have a tool to retune it after you've grown the raid?

MfG
        Goswin


* Re: Adding more drives/saturating the bandwidth
  2009-04-03 20:42             ` Goswin von Brederlow
@ 2009-04-03 21:06               ` Robin Hill
  0 siblings, 0 replies; 16+ messages in thread
From: Robin Hill @ 2009-04-03 21:06 UTC (permalink / raw)
  To: linux raid mailing list


On Fri Apr 03, 2009 at 10:42:20PM +0200, Goswin von Brederlow wrote:

> Richard Scobie <richard@sauce.co.nz> writes:
> 
> > Goswin von Brederlow wrote:
> >
> >>
> >> Now think about the same with 6 disk raid5. Suddenly you have partial
> >> stripes. And the alignment on stripe boundaries is gone too. So now
> >> you need to read 384k (I think) of data, compute or delta (whichever
> >> requires less reads) the parity and write back 384k in 4 out of 6
> >> cases and read 64k and write back 320k otherwise. So on average you
> >> read 277.33k and write 362.66k (= 640k combined). That is twice the
> >> previous bandwidth not to mention the delay for reading.
> >>
> >> So by adding a drive your throughput is suddenly halfed. Reading in
> >> degraded mode suffers a slowdown too. CPU goes up too.
> >>
> >>
> >> The performance of a raid is so much dependent on its access pattern
> >> that imho one can not talk about a general case. But note that the
> >> more drives you have the bigger a stripe becomes and you need larger
> >> sequential writes to avoid reads.
> >
> > I take your point, but don't filesystems like XFS and ext4 play nice
> > in this scenario by combining multiple sub-stripe writes into stripe
> > sized writes out to disk?
> >
> > Regards,
> >
> > Richard
> 
> Some FS have a parameter to tune to the stripe size. If that actually
> helps or not I leave for you to test.
> 
> But ask yourself: Have any a tool to retune after you've grown the raid?
> 
Both XFS and ext2/3 (and presumably 4 as well) allow you to alter the
stripe size after growing the raid (ext2/3 via tune2fs and XFS via mount
options).  No idea about other filesystems though.
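
For instance (a sketch with made-up geometry: an 8-disk RAID5 with 64 KiB
chunks and 4 KiB ext blocks gives stride = 16 blocks and stripe width =
16 x 7 data disks = 112; device names are examples, and the exact option
spelling varies between e2fsprogs/mount versions, so check the man pages):

tune2fs -E stride=16,stripe_width=112 /dev/md0        # ext3/ext4 array
mount -o sunit=128,swidth=896 /dev/md1 /mnt/data      # XFS array; sunit/swidth in 512-byte sectors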

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



Thread overview: 16+ messages
2009-03-26 12:43 Adding more drives/saturating the bandwidth Jon Hardcastle
2009-03-30 15:40 ` Goswin von Brederlow
2009-03-30 16:28   ` Nagilum
2009-03-31  8:23   ` Jon Hardcastle
2009-03-31 13:05     ` Greg Freemyer
2009-03-31 21:07     ` Goswin von Brederlow
2009-04-01  8:15       ` Jon Hardcastle
2009-04-01  8:56       ` Jon Hardcastle
2009-04-01 15:59         ` Goswin von Brederlow
2009-04-01 16:15           ` Greg Freemyer
2009-04-01 14:56       ` Andrew Burgess
2009-04-01 15:17         ` David Lethe
2009-04-01 18:06         ` Goswin von Brederlow
2009-04-01 18:57           ` Richard Scobie
2009-04-03 20:42             ` Goswin von Brederlow
2009-04-03 21:06               ` Robin Hill
