* Linux Raid performance
@ 2010-03-31 19:42 Learner Study
  2010-03-31 20:15 ` Keld Simonsen
  0 siblings, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-03-31 19:42 UTC (permalink / raw)
  To: linux-raid, keld; +Cc: learner.study

Hi Linux Raid Experts:

I was looking at the following wiki on RAID performance on Linux:

https://raid.wiki.kernel.org/index.php/Performance

and noticed that the performance numbers are from the 2.6.12 kernel.

Do we have perf numbers for:
- a recent kernel (something like 2.6.27 / 2.6.31)
- RAID 5 and 6

Can someone please point me to the appropriate link?

Thanks!

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-03-31 19:42 Linux Raid performance Learner Study
@ 2010-03-31 20:15 ` Keld Simonsen
  2010-04-02  3:07   ` Learner Study
  0 siblings, 1 reply; 40+ messages in thread
From: Keld Simonsen @ 2010-03-31 20:15 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
> Hi Linux Raid Experts:
> 
> I was looking at following wiki on raid perf on linux:
> 
> https://raid.wiki.kernel.org/index.php/Performance
> 
> and notice that the performance numbers are with 2.6.12 kernel.
> 
> Do we perf numbers for:
> - latest kernel (something like 2.6.27 / 2.6.31)
> - raid 5 and 6
> 
> Can someone please point me to appropriate link?

The link mentioned above has a number of other performance reports for other kernel versions.
In any case, you should be able to get comparable results with newer kernels; RAID in the
kernel has not become slower since 2.6.12.

best regards
Keld

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-03-31 20:15 ` Keld Simonsen
@ 2010-04-02  3:07   ` Learner Study
  2010-04-02  9:58     ` Nicolae Mihalache
  2010-04-02 11:05     ` Keld Simonsen
  0 siblings, 2 replies; 40+ messages in thread
From: Learner Study @ 2010-04-02  3:07 UTC (permalink / raw)
  To: Keld Simonsen; +Cc: linux-raid, keld, learner.study

Hi Keld:

Do we have RAID5/6 numbers for Linux on any multi-core CPU? Most of
the benchmarks I have seen on the wiki show RAID5 performance of ~150MB/s
with single-core CPUs. How does that scale with multiple cores? Something
like Intel's Jasper Forest?

If available, can you please point me to numbers with a multi-core CPU?

Thanks!

On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
> On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>> Hi Linux Raid Experts:
>>
>> I was looking at following wiki on raid perf on linux:
>>
>> https://raid.wiki.kernel.org/index.php/Performance
>>
>> and notice that the performance numbers are with 2.6.12 kernel.
>>
>> Do we perf numbers for:
>> - latest kernel (something like 2.6.27 / 2.6.31)
>> - raid 5 and 6
>>
>> Can someone please point me to appropriate link?
>
> The link mentioned above has a number of other performance reports, for other levels of the kernel.
> Anyway you should be able to get comparable results for newer kernels, the kernel has not become
> slower since 2.6.12 on RAID.
>
> best regards
> Keld
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02  3:07   ` Learner Study
@ 2010-04-02  9:58     ` Nicolae Mihalache
  2010-04-02 17:58       ` Learner Study
  2010-04-02 11:05     ` Keld Simonsen
  1 sibling, 1 reply; 40+ messages in thread
From: Nicolae Mihalache @ 2010-04-02  9:58 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid

I see some benchmarks performed at boot time on my Xeon E5410 2.33GHz
that show
...
[   37.935702] raid6: sse2x1    3562 MB/s
[   38.003702] raid6: sse2x2    6422 MB/s
[   38.003702] raid6: using algorithm sse2x2 (6422 MB/s)

This speed is higher than the theoretical DDR2-667 bandwidth of
5333 MB/s. So I expect the limiting factor will never be the CPU, and
it would not make sense to use multiple cores.

Am I completely off?
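For reference, a minimal sketch of the arithmetic behind that comparison (the commonly quoted theoretical figure is 5333 MB/s, i.e. PC2-5300; the tiny difference below is only rounding of the 667 MT/s rate):

```python
# Back-of-envelope check: DDR2-667 moves 667e6 transfers/s over a
# 64-bit (8-byte) bus, giving the theoretical peak bandwidth.
transfers_per_s = 667e6
bus_width_bytes = 8
ddr2_667_peak_mb_s = transfers_per_s * bus_width_bytes / 1e6  # ~5336 MB/s
raid6_sse2x2_mb_s = 6422  # from the dmesg lines above

# The single-core parity rate exceeds theoretical memory bandwidth,
# which is the basis for the "CPU is not the limit" conclusion.
print(ddr2_667_peak_mb_s, raid6_sse2x2_mb_s > ddr2_667_peak_mb_s)
```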


nicolae


Learner Study wrote:
> Hi Keld:
>
> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
> single core CPUs. How does that scale with multiple cores? Something
> like intel's jasper forest???
>
> If available, can u pls point me to numbers with multi-core CPU?
>
> Thanks!
>
> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
>   
>> On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>>     
>>> Hi Linux Raid Experts:
>>>
>>> I was looking at following wiki on raid perf on linux:
>>>
>>> https://raid.wiki.kernel.org/index.php/Performance
>>>
>>> and notice that the performance numbers are with 2.6.12 kernel.
>>>
>>> Do we perf numbers for:
>>> - latest kernel (something like 2.6.27 / 2.6.31)
>>> - raid 5 and 6
>>>
>>> Can someone please point me to appropriate link?
>>>       
>> The link mentioned above has a number of other performance reports, for other levels of the kernel.
>> Anyway you should be able to get comparable results for newer kernels, the kernel has not become
>> slower since 2.6.12 on RAID.
>>
>> best regards
>> Keld
>>
>>     
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>   


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02  3:07   ` Learner Study
  2010-04-02  9:58     ` Nicolae Mihalache
@ 2010-04-02 11:05     ` Keld Simonsen
  2010-04-02 11:18       ` Keld Simonsen
  2010-04-02 17:55       ` Learner Study
  1 sibling, 2 replies; 40+ messages in thread
From: Keld Simonsen @ 2010-04-02 11:05 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
> Hi Keld:
> 
> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
> single core CPUs. How does that scale with multiple cores? Something
> like intel's jasper forest???

I have not checked whether the benchmarks were on multi-core machines.
It should not matter much whether there was more than one CPU, though
of course it helps a little. The bonnie++ tests report CPU usage, and it
is not insignificant, say in the 20-60% range for some tests,
but nowhere near a bottleneck. There was one report with a RAID5
sequential read of about 500 MB/s at 36% CPU utilization, so it is
definitely possible to get beyond 150 MB/s. The speed largely
depends on the number of disk drives you employ.
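A rough sense of the headroom those numbers imply (assuming, idealistically, that CPU cost scales linearly with throughput):

```python
# 500 MB/s sequential read at 36% CPU, from the bonnie++ report above.
seq_read_mb_s = 500
cpu_utilization = 0.36
# If CPU cost scales linearly, one core would only saturate at roughly:
cpu_limited_mb_s = seq_read_mb_s / cpu_utilization
print(round(cpu_limited_mb_s))  # ~1389 MB/s, well above what the disks deliver
```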


> If available, can u pls point me to numbers with multi-core CPU?

I don't have such benchmarks, AFAIK. But new benchmarks are always welcome,
so please feel free to submit your findings.

Best regards
keld

> Thanks!
> 
> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
> >> Hi Linux Raid Experts:
> >>
> >> I was looking at following wiki on raid perf on linux:
> >>
> >> https://raid.wiki.kernel.org/index.php/Performance
> >>
> >> and notice that the performance numbers are with 2.6.12 kernel.
> >>
> >> Do we perf numbers for:
> >> - latest kernel (something like 2.6.27 / 2.6.31)
> >> - raid 5 and 6
> >>
> >> Can someone please point me to appropriate link?
> >
> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
> > slower since 2.6.12 on RAID.
> >
> > best regards
> > Keld
> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 11:05     ` Keld Simonsen
@ 2010-04-02 11:18       ` Keld Simonsen
  2010-04-02 17:55       ` Learner Study
  1 sibling, 0 replies; 40+ messages in thread
From: Keld Simonsen @ 2010-04-02 11:18 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

Hi!

Furthermore, I am not sure how much multiple CPUs help you.
It seems like each array is handled by a separate process.
This process probably has internal data for managing the array, and it may 
be bound to run on a single processor, with no multithreading. 

Neil, could you explain if and how Linux MD takes advantage of
multiple processors available, for a single array?

Best regards
keld


On Fri, Apr 02, 2010 at 01:05:06PM +0200, Keld Simonsen wrote:
> On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
> > Hi Keld:
> > 
> > Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
> > the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
> > single core CPUs. How does that scale with multiple cores? Something
> > like intel's jasper forest???
> 
> I have not checked if the benchmarks were on multi core machines. 
> It should not matter much if there were more than one CPU, but
> of cause it helps a little. bonnie++ test reports cpu usage, and this
> is not insignificant, say in the 20 -60 % range for some tests,
> but nowhere near a bottleneck. There was one with a raid5 performance
> seq read of about 500 MB/s with 36 % cpu utilization, so it is
> definitely possible to come beyound 150 MB/s. The speed is largely
> dependent on number of disk drives you employ.
> 
> 
> > If available, can u pls point me to numbers with multi-core CPU?
> 
> I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
> so please feel free to submit your findings.
> 
> Best regards
> keld
> 
> > Thanks!
> > 
> > On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
> > > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
> > >> Hi Linux Raid Experts:
> > >>
> > >> I was looking at following wiki on raid perf on linux:
> > >>
> > >> https://raid.wiki.kernel.org/index.php/Performance
> > >>
> > >> and notice that the performance numbers are with 2.6.12 kernel.
> > >>
> > >> Do we perf numbers for:
> > >> - latest kernel (something like 2.6.27 / 2.6.31)
> > >> - raid 5 and 6
> > >>
> > >> Can someone please point me to appropriate link?
> > >
> > > The link mentioned above has a number of other performance reports, for other levels of the kernel.
> > > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
> > > slower since 2.6.12 on RAID.
> > >
> > > best regards
> > > Keld
> > >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 11:05     ` Keld Simonsen
  2010-04-02 11:18       ` Keld Simonsen
@ 2010-04-02 17:55       ` Learner Study
  2010-04-02 21:14         ` Keld Simonsen
  2010-04-03  0:39         ` Mark Knecht
  1 sibling, 2 replies; 40+ messages in thread
From: Learner Study @ 2010-04-02 17:55 UTC (permalink / raw)
  To: Keld Simonsen; +Cc: linux-raid, keld, learner.study

Hi Keld:

Thanks for your email...

1. Can you please point me to this benchmark (the one showing 500MB/s)? I
would like to know which CPU, HDDs and kernel version were used to achieve
this...

2. Secondly, I would like to understand how the RAID stack (md driver)
scales as we add more cores... if a single core gives ~500MB/s, can two
cores give ~1000MB/s? Can four cores give ~2000MB/s? etc.

Thanks for your time.

On Fri, Apr 2, 2010 at 4:05 AM, Keld Simonsen <keld@keldix.com> wrote:
> On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
>> Hi Keld:
>>
>> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
>> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
>> single core CPUs. How does that scale with multiple cores? Something
>> like intel's jasper forest???
>
> I have not checked if the benchmarks were on multi core machines.
> It should not matter much if there were more than one CPU, but
> of cause it helps a little. bonnie++ test reports cpu usage, and this
> is not insignificant, say in the 20 -60 % range for some tests,
> but nowhere near a bottleneck. There was one with a raid5 performance
> seq read of about 500 MB/s with 36 % cpu utilization, so it is
> definitely possible to come beyound 150 MB/s. The speed is largely
> dependent on number of disk drives you employ.
>
>
>> If available, can u pls point me to numbers with multi-core CPU?
>
> I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
> so please feel free to submit your findings.
>
> Best regards
> keld
>
>> Thanks!
>>
>> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
>> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>> >> Hi Linux Raid Experts:
>> >>
>> >> I was looking at following wiki on raid perf on linux:
>> >>
>> >> https://raid.wiki.kernel.org/index.php/Performance
>> >>
>> >> and notice that the performance numbers are with 2.6.12 kernel.
>> >>
>> >> Do we perf numbers for:
>> >> - latest kernel (something like 2.6.27 / 2.6.31)
>> >> - raid 5 and 6
>> >>
>> >> Can someone please point me to appropriate link?
>> >
>> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
>> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
>> > slower since 2.6.12 on RAID.
>> >
>> > best regards
>> > Keld
>> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02  9:58     ` Nicolae Mihalache
@ 2010-04-02 17:58       ` Learner Study
  0 siblings, 0 replies; 40+ messages in thread
From: Learner Study @ 2010-04-02 17:58 UTC (permalink / raw)
  To: Nicolae Mihalache; +Cc: linux-raid, learner.study

Hi Nicolae:

Can you please let me know:
- what HDDs (RPM, etc.) does your system have?
- which Linux kernel are you using?

Thanks!

On Fri, Apr 2, 2010 at 2:58 AM, Nicolae Mihalache <mache@abcpages.com> wrote:
> I see some benchmarks performed at boot time on my Xeon E5410 2.33GHz
> that shows
> ...
> [   37.935702] raid6: sse2x1    3562 MB/s
> [   38.003702] raid6: sse2x2    6422 MB/s
> [   38.003702] raid6: using algorithm sse2x2 (6422 MB/s)
>
> This  speed is higher that the DDR2 667 theoretical speed of
> 5333MBs/sec. So I expect the limiting factor will never be the CPU, so
> it would not make sense to use multi-core.
>
> Am I completely off?
>
>
> nicolae
>
>
> Learner Study wrote:
>> Hi Keld:
>>
>> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
>> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
>> single core CPUs. How does that scale with multiple cores? Something
>> like intel's jasper forest???
>>
>> If available, can u pls point me to numbers with multi-core CPU?
>>
>> Thanks!
>>
>> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
>>
>>> On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>>>
>>>> Hi Linux Raid Experts:
>>>>
>>>> I was looking at following wiki on raid perf on linux:
>>>>
>>>> https://raid.wiki.kernel.org/index.php/Performance
>>>>
>>>> and notice that the performance numbers are with 2.6.12 kernel.
>>>>
>>>> Do we perf numbers for:
>>>> - latest kernel (something like 2.6.27 / 2.6.31)
>>>> - raid 5 and 6
>>>>
>>>> Can someone please point me to appropriate link?
>>>>
>>> The link mentioned above has a number of other performance reports, for other levels of the kernel.
>>> Anyway you should be able to get comparable results for newer kernels, the kernel has not become
>>> slower since 2.6.12 on RAID.
>>>
>>> best regards
>>> Keld
>>>
>>>
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 17:55       ` Learner Study
@ 2010-04-02 21:14         ` Keld Simonsen
  2010-04-02 21:37           ` Learner Study
  2010-04-03  0:10           ` Learner Study
  2010-04-03  0:39         ` Mark Knecht
  1 sibling, 2 replies; 40+ messages in thread
From: Keld Simonsen @ 2010-04-02 21:14 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Fri, Apr 02, 2010 at 10:55:53AM -0700, Learner Study wrote:
> Hi Keld:
> 
> Thanks for your email...
> 
> 1. Can you pls point me to this benchmark (which shows 500MB/s)? I
> would like to know which CPU, HDDs and kernel version used to achieve
> this...

http://home.comcast.net/~jpiszcz/20080329-raid/
496843 KB/s for sequential input with 10 Raptor drives.
There is probably an email in the archives with more info on the test.

> 2. Secondly, I would like to understand how raid stack (md driver)
> scales as we add more cores...if single core gives ~500MB/s, can two
> core give ~1000MB/s? can four cores give ~2000MB/s? etc....

No, the performance is normally limited by the number of drives.
I would not say that more cores would do nothing, but the gain
would be on the order of 1-2%, I think.
This also depends on whether the code actually runs threaded,
which I doubt...

best regard
keld

> 
> Thanks for your time.
> 
> On Fri, Apr 2, 2010 at 4:05 AM, Keld Simonsen <keld@keldix.com> wrote:
> > On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
> >> Hi Keld:
> >>
> >> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
> >> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
> >> single core CPUs. How does that scale with multiple cores? Something
> >> like intel's jasper forest???
> >
> > I have not checked if the benchmarks were on multi core machines.
> > It should not matter much if there were more than one CPU, but
> > of cause it helps a little. bonnie++ test reports cpu usage, and this
> > is not insignificant, say in the 20 -60 % range for some tests,
> > but nowhere near a bottleneck. There was one with a raid5 performance
> > seq read of about 500 MB/s with 36 % cpu utilization, so it is
> > definitely possible to come beyound 150 MB/s. The speed is largely
> > dependent on number of disk drives you employ.
> >
> >
> >> If available, can u pls point me to numbers with multi-core CPU?
> >
> > I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
> > so please feel free to submit your findings.
> >
> > Best regards
> > keld
> >
> >> Thanks!
> >>
> >> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
> >> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
> >> >> Hi Linux Raid Experts:
> >> >>
> >> >> I was looking at following wiki on raid perf on linux:
> >> >>
> >> >> https://raid.wiki.kernel.org/index.php/Performance
> >> >>
> >> >> and notice that the performance numbers are with 2.6.12 kernel.
> >> >>
> >> >> Do we perf numbers for:
> >> >> - latest kernel (something like 2.6.27 / 2.6.31)
> >> >> - raid 5 and 6
> >> >>
> >> >> Can someone please point me to appropriate link?
> >> >
> >> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
> >> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
> >> > slower since 2.6.12 on RAID.
> >> >
> >> > best regards
> >> > Keld
> >> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 21:14         ` Keld Simonsen
@ 2010-04-02 21:37           ` Learner Study
  2010-04-03 11:20             ` Keld Simonsen
  2010-04-03  0:10           ` Learner Study
  1 sibling, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-04-02 21:37 UTC (permalink / raw)
  To: Keld Simonsen; +Cc: linux-raid, keld, learner.study

I have seen ~180MB/s RAID5 performance with 4 disks... are you saying
that I could achieve even more with a larger number of disks (so
instead of 3+1, try 6+1 or 9+1)?
Logically, this sounds right, but I wanted to verify my thought process
with you...

Thanks!
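A first-order sketch of why that should hold (the 60 MB/s per-disk figure is only inferred from the ~180 MB/s seen with 3+1 disks; bus, controller and parity-CPU limits are ignored):

```python
def raid5_seq_mb_s(n_disks, per_disk_mb_s):
    """First-order estimate of RAID5 streaming throughput: with n disks,
    each stripe holds n-1 data chunks, so useful bandwidth scales with
    the number of data disks."""
    return (n_disks - 1) * per_disk_mb_s

per_disk = 60  # MB/s, hypothetical, inferred from ~180 MB/s with 3+1
for n in (4, 7, 10):  # the 3+1, 6+1 and 9+1 layouts discussed above
    print(n, raid5_seq_mb_s(n, per_disk))
```

Under this model, 6+1 and 9+1 would give roughly double and triple the 3+1 figure, until some other stage in the I/O chain becomes the limit.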

On Fri, Apr 2, 2010 at 2:14 PM, Keld Simonsen <keld@keldix.com> wrote:
> On Fri, Apr 02, 2010 at 10:55:53AM -0700, Learner Study wrote:
>> Hi Keld:
>>
>> Thanks for your email...
>>
>> 1. Can you pls point me to this benchmark (which shows 500MB/s)? I
>> would like to know which CPU, HDDs and kernel version used to achieve
>> this...
>
> http://home.comcast.net/~jpiszcz/20080329-raid/
> 496843   KB/s for sequential input with 10 raptor drives
> There probably is an email in the archives with more info on the
> test.
>
>> 2. Secondly, I would like to understand how raid stack (md driver)
>> scales as we add more cores...if single core gives ~500MB/s, can two
>> core give ~1000MB/s? can four cores give ~2000MB/s? etc....
>
> No, the performance is normally limited by the number of drives.
> I would not wsay that more cores woould do a little
> but it would be in the order of 1-2 % I think.
> This is also dependent on wheteher the code actually runs threaded.
> I doubt it....
>
> best regard
> keld
>
>>
>> Thanks for your time.
>>
>> On Fri, Apr 2, 2010 at 4:05 AM, Keld Simonsen <keld@keldix.com> wrote:
>> > On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
>> >> Hi Keld:
>> >>
>> >> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
>> >> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
>> >> single core CPUs. How does that scale with multiple cores? Something
>> >> like intel's jasper forest???
>> >
>> > I have not checked if the benchmarks were on multi core machines.
>> > It should not matter much if there were more than one CPU, but
>> > of cause it helps a little. bonnie++ test reports cpu usage, and this
>> > is not insignificant, say in the 20 -60 % range for some tests,
>> > but nowhere near a bottleneck. There was one with a raid5 performance
>> > seq read of about 500 MB/s with 36 % cpu utilization, so it is
>> > definitely possible to come beyound 150 MB/s. The speed is largely
>> > dependent on number of disk drives you employ.
>> >
>> >
>> >> If available, can u pls point me to numbers with multi-core CPU?
>> >
>> > I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
>> > so please feel free to submit your findings.
>> >
>> > Best regards
>> > keld
>> >
>> >> Thanks!
>> >>
>> >> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
>> >> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>> >> >> Hi Linux Raid Experts:
>> >> >>
>> >> >> I was looking at following wiki on raid perf on linux:
>> >> >>
>> >> >> https://raid.wiki.kernel.org/index.php/Performance
>> >> >>
>> >> >> and notice that the performance numbers are with 2.6.12 kernel.
>> >> >>
>> >> >> Do we perf numbers for:
>> >> >> - latest kernel (something like 2.6.27 / 2.6.31)
>> >> >> - raid 5 and 6
>> >> >>
>> >> >> Can someone please point me to appropriate link?
>> >> >
>> >> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
>> >> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
>> >> > slower since 2.6.12 on RAID.
>> >> >
>> >> > best regards
>> >> > Keld
>> >> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 21:14         ` Keld Simonsen
  2010-04-02 21:37           ` Learner Study
@ 2010-04-03  0:10           ` Learner Study
  1 sibling, 0 replies; 40+ messages in thread
From: Learner Study @ 2010-04-03  0:10 UTC (permalink / raw)
  To: Keld Simonsen; +Cc: linux-raid, keld, learner.study

Hi:

I can't seem to find the email in the archive describing what test
configuration (kernel, CPU, number of disks, disk RPM, etc.) was used
to achieve 500MB/s...

Can someone please point me to the appropriate discussion...

Thanks!

On Fri, Apr 2, 2010 at 2:14 PM, Keld Simonsen <keld@keldix.com> wrote:
> On Fri, Apr 02, 2010 at 10:55:53AM -0700, Learner Study wrote:
>> Hi Keld:
>>
>> Thanks for your email...
>>
>> 1. Can you pls point me to this benchmark (which shows 500MB/s)? I
>> would like to know which CPU, HDDs and kernel version used to achieve
>> this...
>
> http://home.comcast.net/~jpiszcz/20080329-raid/
> 496843   KB/s for sequential input with 10 raptor drives
> There probably is an email in the archives with more info on the
> test.
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 17:55       ` Learner Study
  2010-04-02 21:14         ` Keld Simonsen
@ 2010-04-03  0:39         ` Mark Knecht
  2010-04-03  1:00           ` John Robinson
  2010-04-03  1:14           ` Richard Scobie
  1 sibling, 2 replies; 40+ messages in thread
From: Mark Knecht @ 2010-04-03  0:39 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Fri, Apr 2, 2010 at 10:55 AM, Learner Study <learner.study@gmail.com> wrote:
<SNIP>
>
> 2. Secondly, I would like to understand how raid stack (md driver)
> scales as we add more cores...if single core gives ~500MB/s, can two
> core give ~1000MB/s? can four cores give ~2000MB/s? etc....
<SNIP>

More cores by themselves certainly won't do it for you.

1) More disks in parallel. (striped data)

2) More ports to attach those drives.

3) More bandwidth on those ports. SATA3 is better than SATA2 is better
than SATA is better than PATA, etc. (Obviously disks must match ports,
right? SATA1 disks on SATA3 ports isn't the right thing...)

4) More bus bandwidth getting to those ports. PCI-Express16 is better
than PCI-Express1 is better than PCI, etc.

5) Faster RAID architectures for the number of disks chosen.

Once all of that is in place, then possibly more cores will help, but I
suspect even then it is probably hard to use 4 billion CPU cycles/second
doing nothing but disk I/O. SATA controllers all do DMA, so CPU
overhead is relatively *very* low.
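The points above amount to a min() over the stages of the I/O chain; a sketch with purely hypothetical numbers:

```python
def array_throughput_cap(per_disk_mb_s, n_disks, port_mb_s, bus_mb_s):
    """The sustained rate is capped by the slowest stage in the chain:
    aggregate disk rate, aggregate port rate, and the host-bus link."""
    return min(per_disk_mb_s * n_disks, port_mb_s * n_disks, bus_mb_s)

# Hypothetical: 8 disks at 150 MB/s on SATA2 (~300 MB/s) ports behind
# a host link with ~1000 MB/s usable bandwidth: the bus limits here.
print(array_throughput_cap(150, 8, 300, 1000))  # -> 1000
```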

HTH,
Mark

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  0:39         ` Mark Knecht
@ 2010-04-03  1:00           ` John Robinson
  2010-04-03  1:14           ` Richard Scobie
  1 sibling, 0 replies; 40+ messages in thread
From: John Robinson @ 2010-04-03  1:00 UTC (permalink / raw)
  To: Mark Knecht; +Cc: Learner Study, linux-raid, keld

On 03/04/2010 01:39, Mark Knecht wrote:
> On Fri, Apr 2, 2010 at 10:55 AM, Learner Study <learner.study@gmail.com> wrote:
> <SNIP>
>> 2. Secondly, I would like to understand how raid stack (md driver)
>> scales as we add more cores...if single core gives ~500MB/s, can two
>> core give ~1000MB/s? can four cores give ~2000MB/s? etc....
> <SNIP>
> 
> More cores by themselves certainly won't do it for you.
> 
> 1) More disks in parallel. (striped data)
> 
> 2) More ports to attach those drives.
> 
> 3) More bandwidth on those ports. SATA3 is better than SATA2 is better
> than SATA is better than PATA, etc. (Obviously disks must match ports,
> right? SATA1 disks on SATA3 ports isn't the right thing...)
> 
> 4) More bus bandwidth getting to those ports. PCI-Express16 is better
> than PCI-Express1 is better than PCI, etc.
> 
> 5) Faster RAID architectures for the number of disks chosen.
> 
> Once all of that is in place then possibly more cores will help, but I
> suspect even then it probably hard to use 4 billion CPU cycles/second
> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
> overhead is relatively *very* low.

Right. As has recently been demonstrated on this list, one core on a 
slow Xeon can do about 8GB/s of RAID-6 calculations, whereas the 
theoretical limit on memory bandwidth for the platform is about 6GB/s, 
so one CPU thread is already faster than the whole system's memory 
bandwidth. After that, current discs manage about 150MB/s at their peak, 
so you'd need 40+ discs in one array to reach the memory bandwidth 
limit. The upshot appears to me to be that with current architectures 
and discs, there's no need for multi-core/multi-threading. Having said 
that, individual arrays currently run single-threaded, but multiple 
arrays can run on separate CPU cores if necessary, with traditional 
process scheduling.

There is experimental support for multi-threading in the kernel right 
now, which was awful because the threading model didn't work, and which 
has even more recently been replaced with another experimental 
multi-threading patch using btrfs thread pooling, which is as yet 
unproved. So, multi-core / multi-threading support is on the way, but at 
the moment is not required.

I haven't included references because a quick search of the last month's 
archives of this list will reveal all of them.

Overall, the bottleneck right now is the discs, as has been the case 
since ooh forever.
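The 40-plus disc estimate follows from simple division, assuming roughly 6 GB/s of platform memory bandwidth and 150 MB/s per disc as above:

```python
import math

mem_bw_mb_s = 6000   # ~6 GB/s platform memory bandwidth (figure above)
disk_mb_s = 150      # peak streaming rate of one current disc (figure above)
discs_to_saturate = math.ceil(mem_bw_mb_s / disk_mb_s)
print(discs_to_saturate)  # -> 40, matching the "40+ discs" estimate
```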

Cheers,

John.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  0:39         ` Mark Knecht
  2010-04-03  1:00           ` John Robinson
@ 2010-04-03  1:14           ` Richard Scobie
  2010-04-03  1:32             ` Mark Knecht
                               ` (3 more replies)
  1 sibling, 4 replies; 40+ messages in thread
From: Richard Scobie @ 2010-04-03  1:14 UTC (permalink / raw)
  To: Mark Knecht; +Cc: Learner Study, linux-raid, keld

Mark Knecht wrote:

> Once all of that is in place then possibly more cores will help, but I
> suspect even then it probably hard to use 4 billion CPU cycles/second
> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
> overhead is relatively *very* low.

There are the RAID5/6 parity calculations to be considered on writes, 
and these appear to be single-threaded. There is an experimental 
multicore kernel option I believe, but recent discussion indicates there 
may be some problems with it.
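
For concreteness, the RAID-5 parity being computed on every write is just a byte-wise XOR across the data chunks of a stripe (RAID-6 adds a second, Galois-field syndrome). A toy sketch, not md's actual implementation, which uses optimised SIMD kernels:

```python
def raid5_parity(chunks):
    """XOR all data chunks of a stripe into one parity chunk.
    Toy illustration only; md uses optimised architecture-specific code."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# A 3-data-disk stripe: XORing parity with any 2 chunks recovers the third.
stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"]
p = raid5_parity(stripe)
recovered = raid5_parity([p, stripe[1], stripe[2]])
print(recovered == stripe[0])  # True
```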

A very quick test on a box here with a Xeon E5440 (4 x 2.8GHz) and a 
SAS-attached 16 x 750GB SATA md RAID6. The array is 72% full and 
probably quite fragmented, and the system is currently otherwise idle.

dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s

Looking at the outputs of vmstat 5 and mpstat -P ALL 5 during this, one 
core (probably doing parity generation) was around 7.56% idle and the 
other 3 were around 88.5, 67.5 and 51.8% idle.

The same test, run when the system was commissioned and the array was 
empty, achieved 565MB/s writes.

Regards,

Richard


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:14           ` Richard Scobie
@ 2010-04-03  1:32             ` Mark Knecht
  2010-04-03  1:37               ` Richard Scobie
  2010-04-03  3:00             ` Learner Study
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 40+ messages in thread
From: Mark Knecht @ 2010-04-03  1:32 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Learner Study, linux-raid, keld

On Fri, Apr 2, 2010 at 6:14 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Mark Knecht wrote:
>
>> Once all of that is in place then possibly more cores will help, but I
>> suspect even then it's probably hard to use 4 billion CPU cycles/second
>> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
>> overhead is relatively *very* low.
>
> There is the RAID5/6 parity calculations to be considered on writes and this
> appears to be single threaded. There is an experimental multicore kernel
> option I believe, but recent discussion indicates there may be some problems
> with it.
>
> A very quick test on a box here on a Xeon E5440 (4 x 2.8GHz) and a SAS
> attached 16 x 750GB SATA md RAID6. The array is 72% full and probably quite
> fragmented and currently the system is idle.
>
> dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s
>
> Looking at the outputs of vmstat 5 and mpstat -P ALL 5 during this, one core
> (probably doing parity generation) was around 7.56% idle and the other 3
> were around 88.5, 67.5 and 51.8% idle.
>
> The same test run when the system was commissioned and the array was empty,
> achieved 565MB/s writes.
>
> Regards,
>
> Richard

Richard,
   Good point. I was limited in my thinking to the sorts of arrays I
might use at home, being no wider than 3, 4 or 5 disks. However, for an
N-wide array, as N approaches infinity so do the cycles required to run
it. I don't think that applies to the OP, but I don't know that.

   Thanks for making the point.

Cheers,
Mark

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:32             ` Mark Knecht
@ 2010-04-03  1:37               ` Richard Scobie
  2010-04-03  3:06                 ` Learner Study
  0 siblings, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2010-04-03  1:37 UTC (permalink / raw)
  To: Mark Knecht; +Cc: Learner Study, linux-raid, keld

Mark Knecht wrote:

> Richard,
>     Good point. I was limited in my thinking to the sorts of arrays I
> might use at home being no wider than 3, 4 or 5 disks. However for our
> N-wide array as N approaches infinity so do the cycles required to run
> it. I don't think that applies to the OP but I don't know that.
>

I said I thought the busiest CPU was the parity generation one, but in 
hindsight this cannot be correct, as it was almost maxed out at half the 
write speed the array achieved when it was empty.

Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:14           ` Richard Scobie
  2010-04-03  1:32             ` Mark Knecht
@ 2010-04-03  3:00             ` Learner Study
  2010-04-03 19:27               ` Richard Scobie
  2010-04-03 18:14             ` MRK
  2010-04-14 20:50             ` Bill Davidsen
  3 siblings, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-04-03  3:00 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, linux-raid, keld, learner.study

Hi Richard:

Thanks for sharing your results. So, 565MB/s with 16 disks in an
array. It would be nice to see how it scales as we add more disks.

What kernel version did you use for this test?

Thanks!

On Fri, Apr 2, 2010 at 6:14 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Mark Knecht wrote:
>
>> Once all of that is in place then possibly more cores will help, but I
>> suspect even then it's probably hard to use 4 billion CPU cycles/second
>> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
>> overhead is relatively *very* low.
>
> There is the RAID5/6 parity calculations to be considered on writes and this
> appears to be single threaded. There is an experimental multicore kernel
> option I believe, but recent discussion indicates there may be some problems
> with it.
>
> A very quick test on a box here on a Xeon E5440 (4 x 2.8GHz) and a SAS
> attached 16 x 750GB SATA md RAID6. The array is 72% full and probably quite
> fragmented and currently the system is idle.
>
> dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s
>
> Looking at the outputs of vmstat 5 and mpstat -P ALL 5 during this, one core
> (probably doing parity generation) was around 7.56% idle and the other 3
> were around 88.5, 67.5 and 51.8% idle.
>
> The same test run when the system was commissioned and the array was empty,
> achieved 565MB/s writes.
>
> Regards,
>
> Richard
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:37               ` Richard Scobie
@ 2010-04-03  3:06                 ` Learner Study
  0 siblings, 0 replies; 40+ messages in thread
From: Learner Study @ 2010-04-03  3:06 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, linux-raid, keld, learner.study

Thanks to everyone for sharing their experiences...this was helpful.

I don't have the luxury of 16 SAS/SATA drives...I guess it would be
great if someone could share results with an even higher number of
disks...I would like to know the maximum performance that can be
reached...I understand the theoretical figure is more like a couple of
terabytes...but won't we run into some sort of Linux filesystem or
other kernel bottleneck as we increase the # of disks?

On Fri, Apr 2, 2010 at 6:37 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Mark Knecht wrote:
>
>> Richard,
>>    Good point. I was limited in my thinking to the sorts of arrays I
>> might use at home being no wider than 3, 4 or 5 disks. However for our
>> N-wide array as N approaches infinity so do the cycles required to run
>> it. I don't think that applies to the OP but I don't know that.
>>
>
> I said I thought the busiest CPU was the parity generation one, but in
> hindsight this cannot be correct, as it was almost maxed out at half the
> write speed the array achieved when it was empty.
>
> Regards,
>
> Richard
>
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-02 21:37           ` Learner Study
@ 2010-04-03 11:20             ` Keld Simonsen
  2010-04-03 15:56               ` Learner Study
  0 siblings, 1 reply; 40+ messages in thread
From: Keld Simonsen @ 2010-04-03 11:20 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Fri, Apr 02, 2010 at 02:37:40PM -0700, Learner Study wrote:
> I have seen ~180MB/s RAID5 performance with 4 disks...are you saying
> that I could achieve even more if I had a greater number of disks (so
> instead of 3+1, try 6+1 or 9+1)?
> Logically, this sounds right but wanted to verify my thought process
> with you....

Yes, with more spindles you can generally expect more performance.
Beware of bottlenecks, tho.

Best regards
keld

> Thanks!
> 
> On Fri, Apr 2, 2010 at 2:14 PM, Keld Simonsen <keld@keldix.com> wrote:
> > On Fri, Apr 02, 2010 at 10:55:53AM -0700, Learner Study wrote:
> >> Hi Keld:
> >>
> >> Thanks for your email...
> >>
> >> 1. Can you pls point me to this benchmark (which shows 500MB/s)? I
> >> would like to know which CPU, HDDs and kernel version were used to achieve
> >> this...
> >
> > http://home.comcast.net/~jpiszcz/20080329-raid/
> > 496843   KB/s for sequential input with 10 raptor drives
> > There probably is an email in the archives with more info on the
> > test.
> >
> >> 2. Secondly, I would like to understand how raid stack (md driver)
> >> scales as we add more cores...if single core gives ~500MB/s, can two
> >> core give ~1000MB/s? can four cores give ~2000MB/s? etc....
> >
> > No, the performance is normally limited by the number of drives.
> > I would not say that more cores would do nothing,
> > but the gain would be in the order of 1-2 % I think.
> > This also depends on whether the code actually runs threaded.
> > I doubt it....
> >
> > best regard
> > keld
> >
> >>
> >> Thanks for your time.
> >>
> >> On Fri, Apr 2, 2010 at 4:05 AM, Keld Simonsen <keld@keldix.com> wrote:
> >> > On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
> >> >> Hi Keld:
> >> >>
> >> >> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
> >> >> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
> >> >> single core CPUs. How does that scale with multiple cores? Something
> >> >> like intel's jasper forest???
> >> >
> >> > I have not checked if the benchmarks were run on multi-core machines.
> >> > It should not matter much if there were more than one CPU, but
> >> > of course it helps a little. bonnie++ tests report CPU usage, and this
> >> > is not insignificant, say in the 20-60 % range for some tests,
> >> > but nowhere near a bottleneck. There was one with a raid5 performance
> >> > seq read of about 500 MB/s with 36 % cpu utilization, so it is
> >> > definitely possible to go beyond 150 MB/s. The speed is largely
> >> > dependent on the number of disk drives you employ.
> >> >
> >> >
> >> >> If available, can u pls point me to numbers with multi-core CPU?
> >> >
> >> > I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
> >> > so please feel free to submit your findings.
> >> >
> >> > Best regards
> >> > keld
> >> >
> >> >> Thanks!
> >> >>
> >> >> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
> >> >> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
> >> >> >> Hi Linux Raid Experts:
> >> >> >>
> >> >> >> I was looking at following wiki on raid perf on linux:
> >> >> >>
> >> >> >> https://raid.wiki.kernel.org/index.php/Performance
> >> >> >>
> >> >> >> and notice that the performance numbers are with 2.6.12 kernel.
> >> >> >>
> >> >> >> Do we perf numbers for:
> >> >> >> - latest kernel (something like 2.6.27 / 2.6.31)
> >> >> >> - raid 5 and 6
> >> >> >>
> >> >> >> Can someone please point me to appropriate link?
> >> >> >
> >> >> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
> >> >> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
> >> >> > slower since 2.6.12 on RAID.
> >> >> >
> >> >> > best regards
> >> >> > Keld
> >> >> >
> >> >
> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03 11:20             ` Keld Simonsen
@ 2010-04-03 15:56               ` Learner Study
  2010-04-04  1:58                 ` Keld Simonsen
  0 siblings, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-04-03 15:56 UTC (permalink / raw)
  To: Keld Simonsen; +Cc: linux-raid, keld, learner.study

Can you please shed some light on what kinds of bottlenecks may impact performance?

Thanks!

On Sat, Apr 3, 2010 at 4:20 AM, Keld Simonsen <keld@keldix.com> wrote:
> On Fri, Apr 02, 2010 at 02:37:40PM -0700, Learner Study wrote:
>> I have seen ~180MB/s RAID5 performance with 4 disks...are you saying
>> that I could achieve even more if I had a greater number of disks (so
>> instead of 3+1, try 6+1 or 9+1)?
>> Logically, this sounds right but wanted to verify my thought process
>> with you....
>
> Yes, with more spindles you can generally expect more performance.
> Beware of bottlenecks, tho.
>
> Best regards
> keld
>
>> Thanks!
>>
>> On Fri, Apr 2, 2010 at 2:14 PM, Keld Simonsen <keld@keldix.com> wrote:
>> > On Fri, Apr 02, 2010 at 10:55:53AM -0700, Learner Study wrote:
>> >> Hi Keld:
>> >>
>> >> Thanks for your email...
>> >>
>> >> 1. Can you pls point me to this benchmark (which shows 500MB/s)? I
>> >> would like to know which CPU, HDDs and kernel version were used to achieve
>> >> this...
>> >
>> > http://home.comcast.net/~jpiszcz/20080329-raid/
>> > 496843   KB/s for sequential input with 10 raptor drives
>> > There probably is an email in the archives with more info on the
>> > test.
>> >
>> >> 2. Secondly, I would like to understand how raid stack (md driver)
>> >> scales as we add more cores...if single core gives ~500MB/s, can two
>> >> core give ~1000MB/s? can four cores give ~2000MB/s? etc....
>> >
>> > No, the performance is normally limited by the number of drives.
>> > I would not say that more cores would do nothing,
>> > but the gain would be in the order of 1-2 % I think.
>> > This also depends on whether the code actually runs threaded.
>> > I doubt it....
>> >
>> > best regard
>> > keld
>> >
>> >>
>> >> Thanks for your time.
>> >>
>> >> On Fri, Apr 2, 2010 at 4:05 AM, Keld Simonsen <keld@keldix.com> wrote:
>> >> > On Thu, Apr 01, 2010 at 08:07:25PM -0700, Learner Study wrote:
>> >> >> Hi Keld:
>> >> >>
>> >> >> Do we have raid5/6 numbers for linux on any multi-core CPU? Most of
>> >> >> the benchmarks I have seen on wiki show raid5 perf to be ~150MB/s with
>> >> >> single core CPUs. How does that scale with multiple cores? Something
>> >> >> like intel's jasper forest???
>> >> >
>> >> > I have not checked if the benchmarks were run on multi-core machines.
>> >> > It should not matter much if there were more than one CPU, but
>> >> > of course it helps a little. bonnie++ tests report CPU usage, and this
>> >> > is not insignificant, say in the 20 -60 % range for some tests,
>> >> > but nowhere near a bottleneck. There was one with a raid5 performance
>> >> > seq read of about 500 MB/s with 36 % cpu utilization, so it is
>> >> > definitely possible to go beyond 150 MB/s. The speed is largely
>> >> > dependent on the number of disk drives you employ.
>> >> >
>> >> >
>> >> >> If available, can u pls point me to numbers with multi-core CPU?
>> >> >
>> >> > I dont have such benchmarks AFAIK. But new benchmarks are always welcome,
>> >> > so please feel free to submit your findings.
>> >> >
>> >> > Best regards
>> >> > keld
>> >> >
>> >> >> Thanks!
>> >> >>
>> >> >> On Wed, Mar 31, 2010 at 1:15 PM, Keld Simonsen <keld@keldix.com> wrote:
>> >> >> > On Wed, Mar 31, 2010 at 12:42:57PM -0700, Learner Study wrote:
>> >> >> >> Hi Linux Raid Experts:
>> >> >> >>
>> >> >> >> I was looking at following wiki on raid perf on linux:
>> >> >> >>
>> >> >> >> https://raid.wiki.kernel.org/index.php/Performance
>> >> >> >>
>> >> >> >> and notice that the performance numbers are with 2.6.12 kernel.
>> >> >> >>
>> >> >> >> Do we perf numbers for:
>> >> >> >> - latest kernel (something like 2.6.27 / 2.6.31)
>> >> >> >> - raid 5 and 6
>> >> >> >>
>> >> >> >> Can someone please point me to appropriate link?
>> >> >> >
>> >> >> > The link mentioned above has a number of other performance reports, for other levels of the kernel.
>> >> >> > Anyway you should be able to get comparable results for newer kernels, the kernel has not become
>> >> >> > slower since 2.6.12 on RAID.
>> >> >> >
>> >> >> > best regards
>> >> >> > Keld
>> >> >> >
>> >> >
>> >
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:14           ` Richard Scobie
  2010-04-03  1:32             ` Mark Knecht
  2010-04-03  3:00             ` Learner Study
@ 2010-04-03 18:14             ` MRK
  2010-04-03 19:56               ` Richard Scobie
  2010-04-14 20:50             ` Bill Davidsen
  3 siblings, 1 reply; 40+ messages in thread
From: MRK @ 2010-04-03 18:14 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, Learner Study, linux-raid, keld

Richard Scobie wrote:
> Mark Knecht wrote:
>
>> Once all of that is in place then possibly more cores will help, but I
>> suspect even then it's probably hard to use 4 billion CPU cycles/second
>> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
>> overhead is relatively *very* low.
>
> There is the RAID5/6 parity calculations to be considered on writes 
> and this appears to be single threaded. There is an experimental 
> multicore kernel option I believe, but recent discussion indicates 
> there may be some problems with it.
>
> A very quick test on a box here on a Xeon E5440 (4 x 2.8GHz) and a SAS 
> attached 16 x 750GB SATA md RAID6. The array is 72% full and probably 
> quite fragmented and currently the system is idle.
>
> dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s
>
> Looking at the outputs of vmstat 5 and mpstat -P ALL 5 during this, 
> one core (probably doing parity generation) was around 7.56% idle and 
> the other 3 were around 88.5, 67.5 and 51.8% idle.
>
> The same test run when the system was commissioned and the array was 
> empty, achieved 565MB/s writes.

I was able to achieve about 430MB/sec on a 24-disk RAID-6 with dd on an 
XFS filesystem which was 70% full. I don't think it would have made a 
great difference even if it had been empty. It was a 54xx Xeon CPU.
I spent some time trying to optimize it but that was the best I could 
get. Anyway, both my benchmark and Richard's imply a very significant 
bottleneck somewhere.
16 SATA disks have an aggregate I/O streaming performance of about 
1.4GB/sec, so getting 500MB/sec is about 3 times slower.
RAID-0 does not have this problem: there is an old post by Mark Delfman 
on this ML in which he was able to obtain about 1.7GB/sec with 10 SAS 
disks (15Krpm) in RAID-0, which is much higher than 500MB/s and is 
about the bare disk speed.
I always thought the reason for the slower RAID 5/6 was the parity 
computation, but now that Nicolae has pointed out that the parity 
computation speed is so high, the reason must be elsewhere.
Could it be RAM I/O? RAID 5/6 copies data, then probably reads it 
again for the parity computation and then writes the parity out... the 
CPU cache is too small to hold a stripe for large arrays, so it's at 
least 3 RAM accesses, but even so it should be way faster than this imho.
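
The shortfall arithmetic is worth making explicit. Assuming a per-disk streaming rate of ~90MB/sec (a figure quoted later in this thread, roughly consistent with the ~1.4GB/sec aggregate above):

```python
# Aggregate streaming capacity vs. observed array speed.
# ~90 MB/s per SATA disk is an assumed figure, not a measurement.
disks = 16
per_disk_mb_s = 90
observed_mb_s = 500

aggregate = disks * per_disk_mb_s      # 1440 MB/s ~= 1.4 GB/s raw capacity
shortfall = aggregate / observed_mb_s  # ~2.9x slower than raw capacity
print(aggregate, round(shortfall, 1))  # 1440 2.9
```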

MRK

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  3:00             ` Learner Study
@ 2010-04-03 19:27               ` Richard Scobie
  0 siblings, 0 replies; 40+ messages in thread
From: Richard Scobie @ 2010-04-03 19:27 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid

Learner Study wrote:

> What kernel version did you use for this test?

2.6.27.19-78.2.30.fc9.x86_64

Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03 18:14             ` MRK
@ 2010-04-03 19:56               ` Richard Scobie
  2010-04-04 15:00                 ` MRK
  0 siblings, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2010-04-03 19:56 UTC (permalink / raw)
  To: MRK; +Cc: Mark Knecht, Learner Study, linux-raid, keld

MRK wrote:

> I spent some time trying to optimize it but that was the best I could 
> get. Anyway both my benchmark and Richard's one imply a very significant 
> bottleneck somewhere.

This bottleneck is the SAS controller, at least in my case. I did the 
same math regarding streaming performance of one drive times the number 
of drives and wondered where the shortfall was, after tests showed I 
could only do streaming reads at 850MB/s on the same array.

A query to an LSI engineer got the following response, which basically 
boils down to "you get what you pay for" - SAS vs SATA drives.

"Yes, you're at the "practical" limit.

With that setup and SAS disks, you will exceed 1200 MB/s.  Could go
higher than 1,400 MB/s given the right server chipset.

However with SATA disks, and the way they break up data transfers, 815
to 850 MB/s is the best you can do.

Under SATA, there are multiple connections per I/O request.
   * Command Initiator -> HDD
   * DMA Setup  Initiator -> HDD
   * DMA Activate  HDD -> Initiator
   * Data   HDD -> Initiator
   * Status    HDD -> Initiator
And there is little ability with typical SATA disks to combine traffic
from different I/Os on the same connection.  So you get lots of
individual connections being made, used, & broken.

Contrast that with SAS which has typically 2 connections per I/O, and
will combine traffic from more than 1 I/O per connection.  It uses the
SAS links much more efficiently."
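
A toy model illustrates why per-I/O connection overhead matters at these rates. All figures below are illustrative assumptions, not measured SAS/SATA numbers:

```python
# Toy link-efficiency model: each I/O pays a fixed connection-setup
# overhead, so effective rate = payload / (payload_time + overhead_time).
# All figures are illustrative assumptions only.
LINK_MB_S = 300          # roughly a 3.0Gbit/s SATA link after 8b/10b coding
IO_SIZE_MB = 0.5         # assumed transfer size per connection

def effective_rate(overhead_us):
    payload_s = IO_SIZE_MB / LINK_MB_S
    return IO_SIZE_MB / (payload_s + overhead_us / 1e6)

# More connection setup/teardown per I/O (SATA-style) -> lower efficiency
# than combining traffic over fewer connections (SAS-style).
print(round(effective_rate(20)), ">", round(effective_rate(100)))  # 296 > 283
```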


Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03 15:56               ` Learner Study
@ 2010-04-04  1:58                 ` Keld Simonsen
  0 siblings, 0 replies; 40+ messages in thread
From: Keld Simonsen @ 2010-04-04  1:58 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid, keld

On Sat, Apr 03, 2010 at 08:56:59AM -0700, Learner Study wrote:
> Can you please throw light on what kind of bottlenecks that may impact perf....

See for example https://raid.wiki.kernel.org/index.php/Performance#Bottlenecks

best regards
keld

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03 19:56               ` Richard Scobie
@ 2010-04-04 15:00                 ` MRK
  2010-04-04 18:26                   ` Learner Study
  2010-04-04 23:24                   ` Richard Scobie
  0 siblings, 2 replies; 40+ messages in thread
From: MRK @ 2010-04-04 15:00 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, Learner Study, linux-raid, keld

Richard Scobie wrote:
> MRK wrote:
>
>> I spent some time trying to optimize it but that was the best I could 
>> get. Anyway both my benchmark and Richard's one imply a very 
>> significant bottleneck somewhere.
>
> This bottleneck is the SAS controller, at least in my case. I did the 
> same math regarding streaming performance of one drive times the number of 
> drives and wondered where the shortfall was, after tests showed I could 
> only streaming read at 850MB/s on the same array.
>
> A query to an LSI engineer got the following response, which basically 
> boils down to "you get what you pay for" - SAS vs SATA drives.
>
> "Yes, you're at the "practical" limit.
>
> With that setup and SAS disks, you will exceed 1200 MB/s.  Could go
> higher than 1,400 MB/s given the right server chipset.
>
> However with SATA disks, and the way they break up data transfers, 815
> to 850 MB/s is the best you can do.
>
> Under SATA, there are multiple connections per I/O request.
>   * Command Initiator -> HDD
>   * DMA Setup  Initiator -> HDD
>   * DMA Activate  HDD -> Initiator
>   * Data   HDD -> Initiator
>   * Status    HDD -> Initiator
> And there is little ability with typical SATA disks to combine traffic
> from different I/Os on the same connection.  So you get lots of
> individual connections being made, used, & broken.
>
> Contrast that with SAS which has typically 2 connections per I/O, and
> will combine traffic from more than 1 I/O per connection.  It uses the
> SAS links much more efficiently."

Firstly: Happy Easter!  :-)

Secondly:

If this is true then one won't achieve higher speeds even on RAID-0. If 
anybody can test this... I cannot right now.

I am a bit surprised though. The SATA "link" is one per drive, so if 1 
drive is able to do 90MB/sec, N drives on N cables should do Nx90MB/sec.
If this is not so, then the chipset of the controller must be the 
bottleneck.
If that is the case, the newer LSI controllers at 6.0Gbit/sec might be 
able to do better (they supposedly have a faster chip). Also, maybe one 
could buy more controller cards and divide the drives among them. These 
two workarounds would still be cheaper than SAS drives.




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 15:00                 ` MRK
@ 2010-04-04 18:26                   ` Learner Study
  2010-04-04 18:46                     ` Mark Knecht
  2010-04-04 23:24                   ` Richard Scobie
  1 sibling, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-04-04 18:26 UTC (permalink / raw)
  To: MRK; +Cc: Richard Scobie, Mark Knecht, linux-raid, keld, learner.study

Happy Easter!!!

So, 550-600MB/s is the best we have seen with Linux raid using 16-24 SAS drives.

Not sure if it's appropriate to ask on this list - has someone seen
better numbers with a non-Linux RAID stack? Perhaps FreeBSD/Lustre..

Thanks for your time!

On Sun, Apr 4, 2010 at 8:00 AM, MRK <mrk@shiftmail.org> wrote:
> Richard Scobie wrote:
>>
>> MRK wrote:
>>
>>> I spent some time trying to optimize it but that was the best I could
>>> get. Anyway both my benchmark and Richard's one imply a very significant
>>> bottleneck somewhere.
>>
>> This bottleneck is the SAS controller, at least in my case. I did the same
>> math regarding streaming performance of one drive times the number of drives and
>> wondered where the shortfall was, after tests showed I could only streaming
>> read at 850MB/s on the same array.
>>
>> A query to an LSI engineer got the following response, which basically
>> boils down to "you get what you pay for" - SAS vs SATA drives.
>>
>> "Yes, you're at the "practical" limit.
>>
>> With that setup and SAS disks, you will exceed 1200 MB/s.  Could go
>> higher than 1,400 MB/s given the right server chipset.
>>
>> However with SATA disks, and the way they break up data transfers, 815
>> to 850 MB/s is the best you can do.
>>
>> Under SATA, there are multiple connections per I/O request.
>>  * Command Initiator -> HDD
>>  * DMA Setup  Initiator -> HDD
>>  * DMA Activate  HDD -> Initiator
>>  * Data   HDD -> Initiator
>>  * Status    HDD -> Initiator
>> And there is little ability with typical SATA disks to combine traffic
>> from different I/Os on the same connection.  So you get lots of
>> individual connections being made, used, & broken.
>>
>> Contrast that with SAS which has typically 2 connections per I/O, and
>> will combine traffic from more than 1 I/O per connection.  It uses the
>> SAS links much more efficiently."
>
> Firstly: Happy Easter!  :-)
>
> Secondly:
>
> If this is true then one won't achieve higher speeds even on RAID-0. If
> anybody can test this... I cannot right now
>
> I am a bit surprised though. The SATA "link" is one per drive, so if 1 drive
> is able to do 90MB/sec, N drives on N cables should do Nx90MB/sec.
> If this is not so, then the chipset of the controller must be the
> bottleneck.
> If this is so, the newer LSI controllers at 6.0gbit/sec could be able to do
> better (they supposedly have a faster chip). Also maybe one could buy more
> controller cards and divide drives among those. These two workarounds would
> still be cheaper than SAS drives.
>
>
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 18:26                   ` Learner Study
@ 2010-04-04 18:46                     ` Mark Knecht
  2010-04-04 21:28                       ` Jools Wills
  2010-04-04 22:24                       ` Guy Watkins
  0 siblings, 2 replies; 40+ messages in thread
From: Mark Knecht @ 2010-04-04 18:46 UTC (permalink / raw)
  To: Learner Study; +Cc: linux-raid

On Sun, Apr 4, 2010 at 11:26 AM, Learner Study <learner.study@gmail.com> wrote:
> Happy Easter!!!
>
> So, 550-600MB/s is the best we have seen with Linux raid using 16-24 SAS drives.
>
> Not sure if its appropriate to ask on this list - has someone seen
> better numbers with non-linux raid stack? Perhaps freebsd/lustre..
>
> Thanks for your time!
>

Are you just trolling or something? Why are you asking about non-linux
RAID on a Linux software RAID list?

I cannot believe the name Learner Study is real, is it? Possibly it
would make sense to first study, and from that hopefully learn, and
then come ask some questions with some depth to them?

- Mark

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 18:46                     ` Mark Knecht
@ 2010-04-04 21:28                       ` Jools Wills
  2010-04-04 22:38                         ` Mark Knecht
  2010-04-04 22:24                       ` Guy Watkins
  1 sibling, 1 reply; 40+ messages in thread
From: Jools Wills @ 2010-04-04 21:28 UTC (permalink / raw)
  To: Mark Knecht; +Cc: Learner Study, linux-raid

On Sun, 2010-04-04 at 11:46 -0700, Mark Knecht wrote:

> Are you just trolling or something? Why are you asking about non-linux
> RAID on a Linux software RAID list?

Don't want to start any flame war, but I had no problem with the mail,
and it seemed a relevant question. After all, we need to compare against
other systems to remain on top, right? To strive to be the best?

If platforms A and B are doing XYZ in terms of RAID, then it is surely up
for discussion. I think attitudes like this make people think we
are fanatical, rather than educated developers trying to improve things.

Best Regards

Jools

Jools Wills
-- 
IT Consultant
Oxford Inspire - http://www.oxfordinspire.co.uk - be inspired
t: 01235 519446 m: 07966 577498
jools@oxfordinspire.co.uk


^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: Linux Raid performance
  2010-04-04 18:46                     ` Mark Knecht
  2010-04-04 21:28                       ` Jools Wills
@ 2010-04-04 22:24                       ` Guy Watkins
  2010-04-05 13:49                         ` Drew
  1 sibling, 1 reply; 40+ messages in thread
From: Guy Watkins @ 2010-04-04 22:24 UTC (permalink / raw)
  To: 'Mark Knecht', 'Learner Study'; +Cc: linux-raid

} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Mark Knecht
} Sent: Sunday, April 04, 2010 2:47 PM
} To: Learner Study
} Cc: linux-raid@vger.kernel.org
} Subject: Re: Linux Raid performance
} 
} On Sun, Apr 4, 2010 at 11:26 AM, Learner Study <learner.study@gmail.com>
} wrote:
} > Happy Easter!!!
} >
} > So, 550-600MB/s is the best we have seen with Linux raid using 16-24 SAS
} drives.
} >
} > Not sure if its appropriate to ask on this list - has someone seen
} > better numbers with non-linux raid stack? Perhaps freebsd/lustre..
} >
} > Thanks for your time!
} >
} 
} Are you just trolling or something? Why are you asking about non-linux
} RAID on a Linux software RAID list?
} 
} I cannot believe the name Learner Study is real, is it? Possibly it
} would make sense to first study, and from that hopefully learn, and
} then come ask some questions with some depth to them?
} 
} - Mark

This seemed like a fair question to me.  I just wonder why Winblows was left
out.  After all, if any OS, including Winblows can outperform Linux, we
should know about it.

Maybe you are afraid the other operating systems outperform Linux?


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 21:28                       ` Jools Wills
@ 2010-04-04 22:38                         ` Mark Knecht
  2010-04-05 10:07                           ` Learner Study
  0 siblings, 1 reply; 40+ messages in thread
From: Mark Knecht @ 2010-04-04 22:38 UTC (permalink / raw)
  To: jools; +Cc: Learner Study, linux-raid

On Sun, Apr 4, 2010 at 2:28 PM, Jools Wills <jools@oxfordinspire.co.uk> wrote:
> On Sun, 2010-04-04 at 11:46 -0700, Mark Knecht wrote:
>
>> Are you just trolling or something? Why are you asking about non-linux
>> RAID on a Linux software RAID list?
>
> Don't want to start any flame war, but I had no problem with the mail,
> and it seemed a relevant question. After all, we need to compare against
> other systems to remain on top right? to strive to be the best?
>
> If platforms A and B are doing XYZ in terms of raid, then it is surely up
> for discussion. I think attitudes like this make people think we
> are fanatical, rather than educated developers trying to improve things.
>
> Best Regards
>
> Jools
>
> Jools Wills
> --
> IT Consultant
> Oxford Inspire - http://www.oxfordinspire.co.uk - be inspired
> t: 01235 519446 m: 07966 577498
> jools@oxfordinspire.co.uk
>
>

You are right. There wasn't anything wrong with the question as asked
and it was not my intent to start any wars. My apologies if it sounded
that way.

My reading of this thread in total has been pretty much that the OP
doesn't know much about RAID (fair enough), or PC architecture (fair
enough) and wants to ask little questions over and over to get
information without doing much work on his own (possibly not so
fair). I should just shut my mouth and let the messages go by, which I
will do with this individual from here on out.

My apologies,
Mark

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 15:00                 ` MRK
  2010-04-04 18:26                   ` Learner Study
@ 2010-04-04 23:24                   ` Richard Scobie
  2010-04-05 11:20                     ` MRK
  1 sibling, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2010-04-04 23:24 UTC (permalink / raw)
  To: MRK; +Cc: Mark Knecht, Learner Study, linux-raid, keld

MRK wrote:

> If this is so, the newer LSI controllers at 6.0gbit/sec could be able to
> do better (they supposedly have a faster chip). Also maybe one could buy
> more controller cards and divide drives among those. These two

Yes, both of these would work.

Someone posted previously on this list and was writing at 1.7GB/s using 
10 x 15K SAS drives in md RAID0. He did mention the throughput was higher 
with the LSI SAS2 cards, even with SAS1 port expanders connected.

Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 22:38                         ` Mark Knecht
@ 2010-04-05 10:07                           ` Learner Study
  2010-04-05 16:35                             ` John Robinson
  0 siblings, 1 reply; 40+ messages in thread
From: Learner Study @ 2010-04-05 10:07 UTC (permalink / raw)
  To: Mark Knecht; +Cc: jools, linux-raid, learner.study

Hi Mark, all:

My apologies if I sounded like a pest by asking too many
questions... Since I don't have access to an array of disks, I turned to
the list to see if I could use the info already out there instead of
reinventing it. How well suited is Linux RAID for 10G kind of
traffic? I don't have the resources to check it myself and thought of
asking for help/guidance!

Thanks!

On Sun, Apr 4, 2010 at 3:38 PM, Mark Knecht <markknecht@gmail.com> wrote:
> On Sun, Apr 4, 2010 at 2:28 PM, Jools Wills <jools@oxfordinspire.co.uk> wrote:
>> On Sun, 2010-04-04 at 11:46 -0700, Mark Knecht wrote:
>>
>>> Are you just trolling or something? Why are you asking about non-linux
>>> RAID on a Linux software RAID list?
>>
>> Don't want to start any flame war, but I had no problem with the mail,
>> and it seemed a relevant question. After all, we need to compare against
>> other systems to remain on top right? to strive to be the best?
>>
>> If platforms A and B are doing XYZ in terms of raid, then it is surely up
>> for discussion. I think attitudes like this make people think we
>> are fanatical, rather than educated developers trying to improve things.
>>
>> Best Regards
>>
>> Jools
>>
>> Jools Wills
>> --
>> IT Consultant
>> Oxford Inspire - http://www.oxfordinspire.co.uk - be inspired
>> t: 01235 519446 m: 07966 577498
>> jools@oxfordinspire.co.uk
>>
>>
>
> You are right. There wasn't anything wrong with the question as asked
> and it was not my intent to start any wars. My apologies if it sounded
> that way.
>
> My reading of this thread in total has been pretty much that the OP
> doesn't know much about RAID (fair enough), or PC architecture (fair
> enough) and wants to ask little questions over and over to get
> information without doing much work on his own. (possibly not so
> fair.) I should just shut my mouth and let the messages go by which I
> will do with this individual from here on out.
>
> My apologies,
> Mark
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 23:24                   ` Richard Scobie
@ 2010-04-05 11:20                     ` MRK
  2010-04-05 19:49                       ` Richard Scobie
  0 siblings, 1 reply; 40+ messages in thread
From: MRK @ 2010-04-05 11:20 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, Learner Study, linux-raid, keld

Richard Scobie wrote:
> MRK wrote:
>
>> If this is so, the newer LSI controllers at 6.0gbit/sec could be able to
>> do better (they supposedly have a faster chip). Also maybe one could buy
>> more controller cards and divide drives among those. These two
>
> Yes, both of these would work.
>
> Someone posted previously on this list and was writing at 1.7GB/s 
> using 10 x 15K SAS drives in md RAID0. He did mention the throughput was 
> higher with the LSI SAS2 cards, even with SAS1 port expanders connected.

Not so fast... actually I see a problem with the previous deduction of 
what the bottleneck is.

The answer from the LSI engineer leads one to think that the bottleneck 
with SATA is the number of IOPS, because there are 5 connections 
established and then broken for each I/O. And this is independent of the 
size transferred by each I/O operation via DMA (the overhead of the data 
transfer is the same in the SAS and SATA cases; it's always the same DMA 
chip doing the transfer).

However, if the total number of IOPS really is the bottleneck in SATA 
with the 3.0gbit/sec LSI cards, why don't they slow down a single SSD 
doing 4K random I/O?

Look at this:
http://www.anandtech.com/show/2954/5
The OCZ Vertex LE does 162 MB/sec at 4K-aligned random writes; that means 
41472 IOPS of independent, unmergeable requests. And that is SATA, not SAS.
This is on Windows. Unfortunately we don't know which controller was 
used for this benchmark.

During an MD-RAID sequential dd write I have seen Linux (via iostat -x 1) 
merging requests by a factor of at least 400 (sometimes much higher), so I 
suppose requests issued to the controller would be at least 1.6 MB long 
(original requests are certainly not shorter than 4K, and 4K x 400 = 1.6MB).

If the system tops out at about 600MB/sec and writes issued are 1.6MB 
long or more, it means that the controller tops out at 375 IOPS or less.

So how come the controller of the anandtech test above is capable of 
doing 41472 IOPS?
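
Spelled out as arithmetic, the puzzle looks like this (a quick sketch
that just reproduces the figures quoted above; reading "162 MB/sec" as
MiB/sec is an assumption, but it is the one that yields 41472):

```python
# Sanity-check the IOPS arithmetic from the discussion above.

KiB = 1024
MiB = 1024 * KiB

# Anandtech SSD result: 162 MB/sec of 4K-aligned random writes.
# The 41472 IOPS figure follows if "MB" is read as MiB:
ssd_iops = 162 * MiB // (4 * KiB)
print(ssd_iops)            # 41472 independent requests per second

# MD-RAID sequential write: requests merged ~400x by the block layer,
# so each request reaching the controller is about 4K * 400 = 1.6 MB.
# At ~600 MB/sec aggregate, that caps the controller at:
controller_iops = 600_000_000 // 1_600_000
print(controller_iops)     # 375 requests per second

# The SSD benchmark implies two orders of magnitude more IOPS than
# the RAID write path appears able to sustain:
print(ssd_iops // controller_iops)   # 110
```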


This is also interesting:

Richard Scobie wrote:
> This bottleneck is the SAS controller, at least in my case. I did the 
> same math regarding streaming performance of one drive times number of 
> drives and wondered where the shortfall was, after tests showed I could 
> only streaming read at 850MB/s on the same array. 
I think if you use dd to read from the 16 underlying devices 
simultaneously and independently, not using MD (output to /dev/null), 
you should obtain the full disk speed of 1.4 GB/sec or so (aggregated). 
I think I did this test in the past and noticed this. Can you try? I 
don't have our big disk array in my hands any more :-(


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-04 22:24                       ` Guy Watkins
@ 2010-04-05 13:49                         ` Drew
  0 siblings, 0 replies; 40+ messages in thread
From: Drew @ 2010-04-05 13:49 UTC (permalink / raw)
  To: linux-raid

> This seemed like a fair question to me.  I just wonder why Winblows was left
> out.  After all, if any OS, including Winblows can outperform Linux, we
> should know about it.

Because M$ Windows' implementation of software raid sucks?

As part of my job I support windows servers and almost every 3rd party
manual I've ever read says something along the lines of "If your data
is mission critical, do not depend on windows built-in software raid.
Get a real hardware raid card." Some then go on to explain all the
pitfalls if a user is still inclined to use it, and some of the issues
are scary.

Contrast that to the reports I see on here of people using 6+, 10+,
12+ member raid arrays on mdadm (software) raid and having no issues
other than perhaps some performance tweaking.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-05 10:07                           ` Learner Study
@ 2010-04-05 16:35                             ` John Robinson
  0 siblings, 0 replies; 40+ messages in thread
From: John Robinson @ 2010-04-05 16:35 UTC (permalink / raw)
  To: Learner Study; +Cc: Linux RAID

On 05/04/2010 11:07, Learner Study wrote:
> Hi Mark, all:
> 
> My apologies if I sounded like a pest by asking too many
> questions...

Don't worry about it, but do take care - I'm afraid I was similarly 
sceptical about the email alias you've chosen, and got the impression 
from some of your questions that you might be a kid asking people here 
to do your homework for you, which does happen from time to time.

> Since I don't have access to array of disks, I sought to
> the list to see if I could use the info already out there instead of
> reinventing it...how well suited is linux raid for 10G kind of
> traffic..

If you mean 10G bits per second, or at least enough for a 10GigE 
network, then yes I think it is, as evidenced by people here saying 
they've got up to 1400MB/s, as long as you've got enough really fast SAS 
drives hooked up via high-class SAS interfaces over wide PCI Express 
buses to Xeon CPUs with fast RAM, though none of this is exactly 
commodity-class hardware.

As far as I know, 10G bytes per second is beyond PC hardware at the 
moment, and I have no knowledge or experience of the kinds of hardware 
that could reach those speeds.

You got me thinking about the bottlenecks, though. Just for fun (so you 
can see what an odd idea of fun I have), I just restarted my big storage 
server box (Core 2 Quad 3.2GHz on 1600MHz FSB, dual-channel DDR2-800 
memory, Intel P45/ICH10R motherboard) into memtest86+, and it's telling 
me that my memory bandwidth is 4.5GB/s. That's probably about the limit 
of this architecture, though the newer Core i5/7 chips with dual- and 
triple-channel memory controllers integrated can probably manage more.

Now, if I'm running RAID-5 or 6, and Linux md has to copy every page I 
want to write to disc (therefore read it from RAM and write it back 
again), and then read it again to calculate the parity block(s), then 
the SATA/SAS controllers have to read it, that's at least 4 memory 
operations for every write to disc, which is going to mean there's a 
memory subsystem bottleneck/throttle on this machine of about 
1.125GB/sec. I skipped over the bit where I have to write the parity to 
RAM and also have the SATA/SAS controllers read that to write it to the 
disc, but that's going to make a fairly modest difference, maybe down to 
1GB/sec. Since my CPU can calculate RAID-5 or 6 parity on a single core 
at 8GB/sec, that's not the bottleneck.
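
Put as a throwaway calculation (the count of 4 memory operations per
written page is the assumption from the paragraph above, not a measured
figure):

```python
# Back-of-envelope memory-bandwidth throttle for RAID-5/6 writes.

mem_bw_gb_s = 4.5        # memtest86+ reading on this box, GB/s

# Per page written to disc: md copies the page (read + write of RAM),
# reads it again to compute parity, and the SAS/SATA controller reads
# it once more over DMA -- at least 4 memory operations in total.
mem_ops_per_page = 4

throttle_gb_s = mem_bw_gb_s / mem_ops_per_page
print(throttle_gb_s)     # 1.125 GB/s ceiling on sustained writes
```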

And I do understand that I'd be rather optimistic to think my RAM 
bandwidth was as much as 4.5GB/s for much of the time.

Reads should be quicker though :-)

Cheers,

John.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-05 11:20                     ` MRK
@ 2010-04-05 19:49                       ` Richard Scobie
  2010-04-05 21:03                         ` Drew
  0 siblings, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2010-04-05 19:49 UTC (permalink / raw)
  To: MRK; +Cc: Mark Knecht, Learner Study, linux-raid, keld

MRK wrote:

> However, if really the total number of IOPS is the bottleneck in SATA
> with the 3.0gbit/sec LSI cards, why they don't slow down a single SSD
> doing 4k random I/O?

We don't know, as we have no information for one of these SSDs attached 
to an LSI SAS controller.

I'm not sure this is an apples-to-apples comparison. The SSD is one 
device, probably connected to a motherboard SATA controller channel.

The RAID array is 16 devices attached to a port expander, in turn 
attached to a SAS controller. At the most simplistic level, surely the 
SAS controller has overhead associated with which drive is being addressed.


> I think if you use dd to read from the 16 underlying devices
> simultaneously, independently, and not using MD, (output to /dev/null)
> you should obtain the full disk speed of 1.4 GB/sec or so (aggregated).
> I think I did this test in the past and I noticed this. Can you try? I
> don't have our big disk array in my hands any more :-(

I'll bear it in mind next time I am in a position to try it.

Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-05 19:49                       ` Richard Scobie
@ 2010-04-05 21:03                         ` Drew
  2010-04-05 22:20                           ` Richard Scobie
  2010-04-05 23:49                           ` Roger Heflin
  0 siblings, 2 replies; 40+ messages in thread
From: Drew @ 2010-04-05 21:03 UTC (permalink / raw)
  To: linux-raid

> The RAID array is 16 devices attached to a port expander in turn attached to
> a SAS controller. At a most simplistic level, surely the SAS controller has
> overhead attached to which drive is being addressed.

Don't forget that with a port expander you are still limited to the bus
speed of the link between it and the HBA. It doesn't matter how many
drives you hang off an expander; you will still never exceed the rated
speed (1.5/3/6Gb/s) for that one port on the HBA.

Say you have four drives behind an expander on a 6Gb/s link. Each
drive in the array could still bonnie++ at the full 6Gb/s but once you
try to write to all four drives simultaneously (RAID-5/6), the best
you can get out of each is around 1.5Gb/s.

That's why I don't use expanders except for archival SATA drives,
which AFAIK only go one expander deep. The performance penalty isn't
worth the cost savings in my books.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-05 21:03                         ` Drew
@ 2010-04-05 22:20                           ` Richard Scobie
  2010-04-05 23:49                           ` Roger Heflin
  1 sibling, 0 replies; 40+ messages in thread
From: Richard Scobie @ 2010-04-05 22:20 UTC (permalink / raw)
  To: Drew; +Cc: linux-raid

Drew wrote:
>> The RAID array is 16 devices attached to a port expander in turn attached to
>> a SAS controller. At a most simplistic level, surely the SAS controller has
>> overhead attached to which drive is being addressed.
>
> Don't forget that with a port expander you are still limited to the bus
> speed of the link between it and the HBA. It doesn't matter how many
> drives you hang off an expander, you will still never exceed the rated
> speed (1.5/3/6Gb/s) for that one port on the HBA.

I'm not sure if you are thinking of a port multiplier instead of a port 
expander.

In any case, in my setup, there are 4 x 3Gb/s lanes connecting the HBA 
to the port expander and each drive is connected to its own port on the 
expander at 3Gb/s (obviously each drive is not streaming at 3Gb/s).

So there is plenty of bandwidth there.

> Say you have four drives behind an expander on a 6Gb/s link. Each
> drive in the array could still bonnie++ at the full 6Gb/s but once you
> try to write to all four drives simultaneously (RAID-5/6), the best
> you can get out of each is around 1.5Gb/s.

4 x 100MB/s (average outer-edge speed of a SATA drive) = 3.2Gb/s, which 
is less than 6Gb/s and works out to 0.8Gb/s per drive, so I'm not sure 
what you mean here.
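
For what it's worth, the arithmetic checks out (plain unit conversion,
8 bits per byte, decimal Gb):

```python
# Aggregate streaming bandwidth of 4 SATA drives vs. a 6Gb/s link.

drives = 4
per_drive_mb_s = 100                        # MB/s, average outer-edge rate

per_drive_gb_s = per_drive_mb_s * 8 / 1000  # convert MB/s -> Gb/s
aggregate_gb_s = drives * per_drive_gb_s

print(per_drive_gb_s)    # 0.8 Gb/s per drive
print(aggregate_gb_s)    # 3.2 Gb/s total, comfortably under the 6Gb/s link
```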

Regards,

Richard

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-05 21:03                         ` Drew
  2010-04-05 22:20                           ` Richard Scobie
@ 2010-04-05 23:49                           ` Roger Heflin
  1 sibling, 0 replies; 40+ messages in thread
From: Roger Heflin @ 2010-04-05 23:49 UTC (permalink / raw)
  To: Drew; +Cc: linux-raid

Drew wrote:
>> The RAID array is 16 devices attached to a port expander in turn attached to
>> a SAS controller. At a most simplistic level, surely the SAS controller has
>> overhead attached to which drive is being addressed.
> 
> Don't forget that with a port expander you are still limited to the bus
> speed of the link between it and the HBA. It doesn't matter how many
> drives you hang off an expander, you will still never exceed the rated
> speed (1.5/3/6Gb/s) for that one port on the HBA.

If it is a SAS connection to the RAID array, they are often quad-channel 
cables (12Gbit/s, i.e. 4x3Gbps); this is what is on the external 
connection of the card, not a single-channel SAS/SATA link like the 
lower-end stuff, and most of the more expensive expanders and RAID 
cabinets use that.

Still, the entire 16-disk setup will be limited to less than whatever 
the interconnect is, and if you start piling more than 16 disks onto 
it, things get pretty messy pretty fast.

> 
> Say you have four drives behind an expander on a 6Gb/s link. Each
> drive in the array could still bonnie++ at the full 6Gb/s but once you
> try to write to all four drives simultaneously (RAID-5/6), the best
> you can get out of each is around 1.5Gb/s.
> 
> That's why I don't use expanders except for archival SATA drives,
> which AFAIK only go one expander deep. The performance penalty isn't
> worth the cost savings in my books.
> 
> 


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Linux Raid performance
  2010-04-03  1:14           ` Richard Scobie
                               ` (2 preceding siblings ...)
  2010-04-03 18:14             ` MRK
@ 2010-04-14 20:50             ` Bill Davidsen
  3 siblings, 0 replies; 40+ messages in thread
From: Bill Davidsen @ 2010-04-14 20:50 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Mark Knecht, Learner Study, linux-raid, keld

Richard Scobie wrote:
> Mark Knecht wrote:
>
>> Once all of that is in place then possibly more cores will help, but I
>> suspect even then it probably hard to use 4 billion CPU cycles/second
>> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
>> overhead is relatively *very* low.
>
> There is the RAID5/6 parity calculations to be considered on writes 
> and this appears to be single threaded. There is an experimental 
> multicore kernel option I believe, but recent discussion indicates 
> there may be some problems with it.

That is being polite. With that option set, just doing a 'check' on a 
raid-5 will generate 100s of threads and max out all cores. I was trying 
to run the experimental FC13 64-bit kernel, and all of a sudden the 
machine came to a crawl and CPU use went to 95%+ on all cores. It also 
drove the CPU temp way up, so I have to regard this as unsuitable for 
anything but light testing.

-- 
Bill Davidsen <davidsen@tmr.com>
  "We can't solve today's problems by using the same thinking we
   used in creating them." - Einstein


^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2010-04-14 20:50 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-03-31 19:42 Linux Raid performance Learner Study
2010-03-31 20:15 ` Keld Simonsen
2010-04-02  3:07   ` Learner Study
2010-04-02  9:58     ` Nicolae Mihalache
2010-04-02 17:58       ` Learner Study
2010-04-02 11:05     ` Keld Simonsen
2010-04-02 11:18       ` Keld Simonsen
2010-04-02 17:55       ` Learner Study
2010-04-02 21:14         ` Keld Simonsen
2010-04-02 21:37           ` Learner Study
2010-04-03 11:20             ` Keld Simonsen
2010-04-03 15:56               ` Learner Study
2010-04-04  1:58                 ` Keld Simonsen
2010-04-03  0:10           ` Learner Study
2010-04-03  0:39         ` Mark Knecht
2010-04-03  1:00           ` John Robinson
2010-04-03  1:14           ` Richard Scobie
2010-04-03  1:32             ` Mark Knecht
2010-04-03  1:37               ` Richard Scobie
2010-04-03  3:06                 ` Learner Study
2010-04-03  3:00             ` Learner Study
2010-04-03 19:27               ` Richard Scobie
2010-04-03 18:14             ` MRK
2010-04-03 19:56               ` Richard Scobie
2010-04-04 15:00                 ` MRK
2010-04-04 18:26                   ` Learner Study
2010-04-04 18:46                     ` Mark Knecht
2010-04-04 21:28                       ` Jools Wills
2010-04-04 22:38                         ` Mark Knecht
2010-04-05 10:07                           ` Learner Study
2010-04-05 16:35                             ` John Robinson
2010-04-04 22:24                       ` Guy Watkins
2010-04-05 13:49                         ` Drew
2010-04-04 23:24                   ` Richard Scobie
2010-04-05 11:20                     ` MRK
2010-04-05 19:49                       ` Richard Scobie
2010-04-05 21:03                         ` Drew
2010-04-05 22:20                           ` Richard Scobie
2010-04-05 23:49                           ` Roger Heflin
2010-04-14 20:50             ` Bill Davidsen
