* xfs > md 50% write performance drop on .30+ kernel?
@ 2009-10-12 16:58 mark delfman
  2009-10-12 18:40 ` Richard Scobie
                   ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: mark delfman @ 2009-10-12 16:58 UTC (permalink / raw)
  To: Linux RAID Mailing List

Hi... in recent tests we are seeing a 50% drop in performance from
XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)

In short:  Performance to MD0 direct = circa 1.7GB/sec (see below), via
XFS circa 850MB/sec.  On the previous system (2.6.28) there was no drop
in performance (in fact often an increase).

I am hopeful that this is simply a matter of barriers etc. on the
newer kernel and MD, but we have tried many options and nothing seems
to change this, so we would very much appreciate advice.


Below is the configuration / test results

Hardware:  decent-performance quad core with an LSI SAS controller:  10 x
15K SAS drives
(note we have tried this on various hardware and with various numbers of drives).

Newer kernel setup  (performance drop)
Kernel 2.6.30.8  (openSUSE userspace)
mdadm - v3.0 - 2nd June 2009
Library version:   1.02.31 (2009-03-03)
Driver version:    4.14.0

RAID0 created: mdadm -C /dev/md0 -l0 -n10 /dev/sd[b-k]
RAID0 Performance:
dd if=/dev/zero of=/dev/md0 bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 12.6685 s, 1.7 GB/s


XFS created:  (as can be seen from the output it is self-aligning, but
we tried various alignments; see also the explicit-alignment sketch below)

# mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=256    agcount=32, agsize=22888176 blks
         =                                           sectsz=512   attr=2
data     =                       bsize=4096   blocks=732421600, imaxpct=5
         =                       sunit=16     swidth=160 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=0
realtime =none                   extsz=655360 blocks=0, rtextents=0
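
For reference, forcing the alignment by hand looks roughly like this (a
sketch assuming the default 64KiB chunk across 10 data drives, which is
what the sunit=16 / swidth=160 blocks above correspond to; adjust su/sw
to match the actual array):

# stripe unit = chunk size, stripe width = su x number of data disks
mkfs.xfs -f -d su=64k,sw=10 /dev/md0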


Mounted:  mount -o nobarrier /dev/md0 /mnt/md0
/dev/md0 on /mnt/md0 type xfs (rw,nobarrier)
(tried with barriers / async)
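
For completeness, this is roughly how we are checking what actually
takes effect after mounting (just a sketch; the md0 / mount point names
are from our setup):

# confirm the options the kernel actually applied
grep md0 /proc/mounts
# see whether XFS logged anything about barriers at mount time
dmesg | grep -i barrier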

Performance:

linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 23.631 s, 887 MB/s



Note:

Older kernel setup (no performance drop)
Kernel 2.6.28.4
mdadm  2.6.8
Library version:   1.02.27 (2008-06-25)
Driver version:    4.14.0


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-12 16:58 xfs > md 50% write performance drop on .30+ kernel? mark delfman
@ 2009-10-12 18:40 ` Richard Scobie
  2009-10-13  1:33 ` Christoph Hellwig
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Richard Scobie @ 2009-10-12 18:40 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

mark delfman wrote:
> Hi... in recent tests we are seeing a 50% drop in performance from
> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)

I can very loosely (based on very early testing and slightly different 
hardware) agree.

A couple of 2.6.27.19-78.2.30.fc9.x86_64 machines - 8GB RAM LSI SAS 
controller and 16 x WD RE3 750GB SATA md RAID6.

With stripe cache set to 16384, I see dd writes of 590MB/s.

Started testing a similar machine yesterday - 12GB RAM LSI SAS 
controller and 16 x WD RE3 1TB SATA md RAID6.

With stripe cache set to 16384, I see dd writes of around 290MB/s and 
when bumped up to 32768 (the maximum), it increases to 407MB/s.
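
For anyone trying to reproduce this, the stripe cache is bumped via
sysfs, roughly as follows (assuming the array is md0; the value is the
number of cached stripe entries, so memory use grows with it and with
the number of member devices):

# default is 256; 32768 is the current maximum
echo 16384 > /sys/block/md0/md/stripe_cache_size
cat /sys/block/md0/md/stripe_cache_size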

Read performance is about the same as the older system - around 960MB/s.

Both systems are using XFS with an external journal.
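
(The external journal setup is roughly the following - a sketch only,
with /dev/sdq standing in for whatever dedicated log device is used:)

# create the filesystem with an external log, then mount against it
mkfs.xfs -f -l logdev=/dev/sdq,size=128m /dev/md0
mount -o logdev=/dev/sdq /dev/md0 /mnt/md0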

As I say, early days - I've not compared scheduler etc. settings between 
kernels, but it is somewhat disappointing.

Regards,

Richard



* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-12 16:58 xfs > md 50% write performance drop on .30+ kernel? mark delfman
  2009-10-12 18:40 ` Richard Scobie
@ 2009-10-13  1:33 ` Christoph Hellwig
  2009-10-13  1:57   ` NeilBrown
  2009-10-13 11:06   ` mark delfman
  2009-10-13  3:38 ` Richard Scobie
  2009-10-13 18:49 ` Greg Freemyer
  3 siblings, 2 replies; 23+ messages in thread
From: Christoph Hellwig @ 2009-10-13  1:33 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

On Mon, Oct 12, 2009 at 05:58:20PM +0100, mark delfman wrote:
> Hi... in recent tests we are seeing a 50% drop in performance from
> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
> 
> In short:  Performance to MD0 direct = circa 1.7GBsec (see below), via
> xfs circa 850MBsec.  On previous system (2.6.28) there was no drop in
> performance (in fact often an increase).
> 
> I am hopefully that this is simply a matter of barriers etc on the
> newer kernel and MD, but we have tried many options and nothing seems
> to change this so would very much appreciate advice.

Did barrier support for RAID0 get introduced in 2.6.30?



* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13  1:33 ` Christoph Hellwig
@ 2009-10-13  1:57   ` NeilBrown
  2009-10-13 11:06   ` mark delfman
  1 sibling, 0 replies; 23+ messages in thread
From: NeilBrown @ 2009-10-13  1:57 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: mark delfman, Linux RAID Mailing List

On Tue, October 13, 2009 12:33 pm, Christoph Hellwig wrote:
> On Mon, Oct 12, 2009 at 05:58:20PM +0100, mark delfman wrote:
>> Hi... in recent tests we are seeing a 50% drop in performance from
>> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
>>
>> In short:  Performance to MD0 direct = circa 1.7GBsec (see below), via
>> xfs circa 850MBsec.  On previous system (2.6.28) there was no drop in
>> performance (in fact often an increase).
>>
>> I am hopefully that this is simply a matter of barriers etc on the
>> newer kernel and MD, but we have tried many options and nothing seems
>> to change this so would very much appreciate advice.
>
> Did barrier support for RAID0 got introduced in 2.6.30?

No, though it is due to go in for 2.6.33

NeilBrown



* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-12 16:58 xfs > md 50% write performance drop on .30+ kernel? mark delfman
  2009-10-12 18:40 ` Richard Scobie
  2009-10-13  1:33 ` Christoph Hellwig
@ 2009-10-13  3:38 ` Richard Scobie
  2009-10-13 10:21   ` Asdo
  2009-10-13 18:49 ` Greg Freemyer
  3 siblings, 1 reply; 23+ messages in thread
From: Richard Scobie @ 2009-10-13  3:38 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

mark delfman wrote:
> Hi... in recent tests we are seeing a 50% drop in performance from
> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)


Richard Scobie wrote:
 > Started testing a similar machine yesterday - 12GB RAM LSI SAS
 > controller and 16 x WD RE3 1TB SATA md RAID6.
 >
 > With stripe cache set to 16384, I see dd writes of around 290MB/s and
 > when bumped up to 32768 (the maximum), it increases to 407MB.

An omission to the above - the machine is running 
kernel-2.6.30.8-64.fc11.x86_64.

Regards,

Richard



* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13  3:38 ` Richard Scobie
@ 2009-10-13 10:21   ` Asdo
  2009-10-13 10:34     ` Mikael Abrahamsson
  2009-10-13 19:53     ` Richard Scobie
  0 siblings, 2 replies; 23+ messages in thread
From: Asdo @ 2009-10-13 10:21 UTC (permalink / raw)
  To: Richard Scobie; +Cc: linux-raid

Richard Scobie wrote:
> mark delfman wrote:
>> Hi... in recent tests we are seeing a 50% drop in performance from
>> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
>
>
> Richard Scobie wrote:
> > Started testing a similar machine yesterday - 12GB RAM LSI SAS
> > controller and 16 x WD RE3 1TB SATA md RAID6.
> >
> > With stripe cache set to 16384, I see dd writes of around 290MB/s and
> > when bumped up to 32768 (the maximum), it increases to 407MB.
>
> An omission to the above - the machine is running 
> kernel-2.6.30.8-64.fc11.x86_64.

That performance is amazing to me. With a 2.6.31 kernel and 
stripe_cache_size 32768 I got around 185MB/sec dd writes (bs=1M) through 
xfs (or 400MB/sec dd to the device directly). My machine was a dual Xeon 
5430 with about 13 SATA Hitachi 7200 rpm disks, MD raid-5, chunk size 
1MB, anticipatory scheduler, no LVM. The controller was a 3ware 9650-16ML.
Do you think it was the controller's overhead? I have heard mixed opinions 
about 3wares. What are the fastest controllers around for MD-raid use?

Thank you


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 10:21   ` Asdo
@ 2009-10-13 10:34     ` Mikael Abrahamsson
  2009-10-13 14:49       ` Asdo
  2009-10-13 19:53     ` Richard Scobie
  1 sibling, 1 reply; 23+ messages in thread
From: Mikael Abrahamsson @ 2009-10-13 10:34 UTC (permalink / raw)
  To: Asdo; +Cc: Richard Scobie, linux-raid

On Tue, 13 Oct 2009, Asdo wrote:

> I have heard mixed opinions about 3wares. What are the fastest 
> controllers around for MD-raid use?

The 3wares are not the fastest around; I don't even get the performance 
you're describing. I use them because I can get them cheap on the used 
market and because they're stable.

There are plenty of articles around describing read/write latency for the 
3wares as one part of the problem. Just to put it into perspective, I get 
approx 5-10 megabyte/s write performance on a 3ware 9500S hw-raid5 with 
ext3, whereas I get at least 10x that for read. The performance with 
dd directly to the device is 100+ megabyte/s though; it's just through the 
fs that it is slow.

The 3wares usually work better if you single-disk them and use md; then I 
get 30-50 megabyte/s write performance through the fs anyway, which is what 
I guess you do as well.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13  1:33 ` Christoph Hellwig
  2009-10-13  1:57   ` NeilBrown
@ 2009-10-13 11:06   ` mark delfman
  2009-10-13 11:09     ` Majed B.
  2009-10-13 22:52     ` Christoph Hellwig
  1 sibling, 2 replies; 23+ messages in thread
From: mark delfman @ 2009-10-13 11:06 UTC (permalink / raw)
  To: Linux RAID Mailing List

A little more information which I ‘think’ seems to point at MD.....

Creating an EXT3 FS on an MD RAID also shows a circa 50% performance drop.
We have tried a multitude of RAID options (RAID6/0, various chunk sizes
etc. - roughly the variations sketched below).

Using a hardware-based RAID, XFS / EXT3 shows no performance drop
(although the hardware RAID is significantly slower than MD in the
first place).

We are happy to keep testing and offering anything that could be
useful; we are just a little stuck thinking of anything else to do....
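
For what it's worth, the kind of variations we have been cycling
through look roughly like this (a sketch; device names, chunk size and
mount point are just examples from our setup):

# RAID6 over the same drives with a larger chunk, then ext3 on top
mdadm -C /dev/md0 -l6 -n10 -c 256 /dev/sd[b-k]
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/md0
dd if=/dev/zero of=/mnt/md0/test bs=1M count=20000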




On Tue, Oct 13, 2009 at 2:33 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Oct 12, 2009 at 05:58:20PM +0100, mark delfman wrote:
>> Hi... in recent tests we are seeing a 50% drop in performance from
>> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
>>
>> In short:  Performance to MD0 direct = circa 1.7GBsec (see below), via
>> xfs circa 850MBsec.  On previous system (2.6.28) there was no drop in
>> performance (in fact often an increase).
>>
>> I am hopefully that this is simply a matter of barriers etc on the
>> newer kernel and MD, but we have tried many options and nothing seems
>> to change this so would very much appreciate advice.
>
> Did barrier support for RAID0 got introduced in 2.6.30?
>
>

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 11:06   ` mark delfman
@ 2009-10-13 11:09     ` Majed B.
       [not found]       ` <66781b10910130412x309d9de2l574ba12a9ed4100a@mail.gmail.com>
  2009-10-13 22:52     ` Christoph Hellwig
  1 sibling, 1 reply; 23+ messages in thread
From: Majed B. @ 2009-10-13 11:09 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

Out of curiosity, why are you upgrading the kernel? Are you after a
certain feature offered by the new kernel?

If not, roll back to a kernel version where you don't face performance drops.

On Tue, Oct 13, 2009 at 2:06 PM, mark delfman
<markdelfman@googlemail.com> wrote:
> A little more information which I ‘think’ seems to point at MD.....
>
> Creating an EXT3 FS on an MD RAID also shows a circa 50% performance drop.
> We have tried a multitude of RAID options (raid6/0 various chunks etc).
>
> Using a hardware based raid XFS / EXT3 shows no performance drop
> (although the hardware raid is significantly slower than MD in the
> first place)
>
> We are happy to keep testing and offering anything that could be
> useful, we are just a little stuck thinking of anything else to do....
>
>
>
>
> On Tue, Oct 13, 2009 at 2:33 AM, Christoph Hellwig <hch@infradead.org> wrote:
>> On Mon, Oct 12, 2009 at 05:58:20PM +0100, mark delfman wrote:
>>> Hi... in recent tests we are seeing a 50% drop in performance from
>>> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
>>>
>>> In short:  Performance to MD0 direct = circa 1.7GBsec (see below), via
>>> xfs circa 850MBsec.  On previous system (2.6.28) there was no drop in
>>> performance (in fact often an increase).
>>>
>>> I am hopefully that this is simply a matter of barriers etc on the
>>> newer kernel and MD, but we have tried many options and nothing seems
>>> to change this so would very much appreciate advice.
>>
>> Did barrier support for RAID0 got introduced in 2.6.30?
>>
>>



-- 
       Majed B.

* Re: xfs > md 50% write performance drop on .30+ kernel?
       [not found]       ` <66781b10910130412x309d9de2l574ba12a9ed4100a@mail.gmail.com>
@ 2009-10-13 11:15         ` Majed B.
  2009-10-13 11:29           ` mark delfman
  2009-10-13 14:30           ` Asdo
  0 siblings, 2 replies; 23+ messages in thread
From: Majed B. @ 2009-10-13 11:15 UTC (permalink / raw)
  To: mark delfman; +Cc: LinuxRaid

Mark, kindly use reply-all :)

I think you could apply the patch for the LSI SAS2 support instead of
using the whole new kernel, assuming the patch doesn't depend on other
things...

On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
<markdelfman@googlemail.com> wrote:
> We are upgrading mainly because of support for the emerging LSI SAS2 cards
> (which we are beta testing now)
-- 
       Majed B.


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 11:15         ` Majed B.
@ 2009-10-13 11:29           ` mark delfman
  2009-10-13 14:30           ` Asdo
  1 sibling, 0 replies; 23+ messages in thread
From: mark delfman @ 2009-10-13 11:29 UTC (permalink / raw)
  To: Majed B.; +Cc: LinuxRaid

I think that is an option... but we have a little time to try to
resolve this, and I think it would be good if we could find the core
problem (for all).



On Tue, Oct 13, 2009 at 12:15 PM, Majed B. <majedb@gmail.com> wrote:
> Mark, kindly use reply-all :)
>
> I think you could apply the patch for the LSI SAS2 support instead of
> using the whole new kernel, assuming the patch doesn't depend on other
> things...
>
> On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
> <markdelfman@googlemail.com> wrote:
>> We upgrading mainly because of support for the emerging LSI SAS2 cards
>> (which we are beta testing now)
> --
>       Majed B.
>

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 11:15         ` Majed B.
  2009-10-13 11:29           ` mark delfman
@ 2009-10-13 14:30           ` Asdo
  2009-10-13 15:13             ` mark delfman
  1 sibling, 1 reply; 23+ messages in thread
From: Asdo @ 2009-10-13 14:30 UTC (permalink / raw)
  To: mark delfman; +Cc: LinuxRaid


> On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
> <markdelfman@googlemail.com> wrote:
>   
>> We upgrading mainly because of support for the emerging LSI SAS2 cards
>> (which we are beta testing now)
>>     

What is this LSI SAS2 card you have with 10+ ports? The only 10+ port 
LSI card I see is the 84016E, and it is SAS1.

You say the driver for such a card is included in the vanilla kernel as 
of 2.6.30? That would be very nice... I grepped the 2.6.31 kernel source 
for LSI cards but I can't find device strings such as 84016E ...

Thank you


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 10:34     ` Mikael Abrahamsson
@ 2009-10-13 14:49       ` Asdo
  0 siblings, 0 replies; 23+ messages in thread
From: Asdo @ 2009-10-13 14:49 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: linux-raid

Mikael Abrahamsson wrote:
> There are plenty of articles around describing read/write latency for 
> the 3wares as one part of the problem. 
I saw them, but the articles I found refer to the older 3wares, the 
9500 like the one you have, and in fact it seems the 9500 has very bad 
performance from what you describe.
> Just to put into perspective, I get approx 5-10 megabyte/s write 
> performance on a 3ware 9500S hw-raid5 with ext3, whereas I get at 
> least 10x times that for read. The performance with dd directly to the 
> device is 100+ megabyte/s though, it's just thru the fs it is slow.
>
> The 3wares usually work better if you single disk them and use md, 
> then I get 30-50 megabyte/s write performance thru the fs anyway, 
> which is what I guess you do as well.
Well, much more than 30-50MB/sec here: as I wrote, about 185MB/sec with 
xfs, and the overall write speed rises to about 330MB/sec if there are 
multiple simultaneous write requests from multiple processes.
I'd guess either the 3ware 9500 is very bad (likely), or you are not 
using xfs, or it's not aligned, or you haven't upped the 
stripe_cache_size...

But anyway...
Has anybody here tried multiple controller cards and can make a 
suggestion on a controller that is fast for Linux MD use (having at 
least 16 ports if possible)?

Thank you


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 14:30           ` Asdo
@ 2009-10-13 15:13             ` mark delfman
  2009-10-13 15:15               ` mark delfman
  0 siblings, 1 reply; 23+ messages in thread
From: mark delfman @ 2009-10-13 15:13 UTC (permalink / raw)
  To: Asdo; +Cc: LinuxRaid

We don't use 10 ports; we use 8 ports into a 36-port expander. The 8
ports act as a single wide port.

We are hitting a performance limit of circa 1.6 - 1.9GB/sec regardless
of the number of drives, so it maxes out at around 8 / 9 drives (with
15K).  RAID6 is around 900MB/sec, I recall.  We expect more with
emerging expanders.

We were hoping to use DM MPIO to increase performance using multiple
cards and paths, but MPIO at best matches the performance of a single
card and most likely pulls it down.... but this is a different topic I
guess.

XFS in the past has often increased performance - not always on simple
sequential writes, but FSes are a lot better at intelligently caching
data... so I am very keen to help in whatever way I can to resolve the
FS > MD performance problem.

Thanks again.... Mark


On Tue, Oct 13, 2009 at 3:30 PM, Asdo <asdo@shiftmail.org> wrote:
>
>> On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
>> <markdelfman@googlemail.com> wrote:
>>
>>>
>>> We upgrading mainly because of support for the emerging LSI SAS2 cards
>>> (which we are beta testing now)
>>>
>
> What is this LSI SAS2 card you have with 10+ ports? The only 10+ ports LSI
> card I see is the 84016E and it is a SAS1.
>
> You say the driver for such card is included in the vanilla kernel at
> 2.6.30? That would be very nice... I grepped the 2.6.31 kernel source for
> LSI cards but I can't find device strings such as 84016E ...
>
> Thank you
>


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 15:13             ` mark delfman
@ 2009-10-13 15:15               ` mark delfman
  0 siblings, 0 replies; 23+ messages in thread
From: mark delfman @ 2009-10-13 15:15 UTC (permalink / raw)
  To: Asdo; +Cc: LinuxRaid

PS - LSI haven't publicly released the SAS2 (non-RAID) card as yet.
They have just released the RAID version and will follow with the
non-RAID one soon.

Their chips are being used on some of the later server motherboards,
so SAS2 is around, but new.



On Tue, Oct 13, 2009 at 4:13 PM, mark delfman
<markdelfman@googlemail.com> wrote:
> We dont use 10 ports, we use 8 ports > 36 port expander. The 8 ports
> act as a single wide port.
>
> We are hitting a performance limit of circa 1.6 - 1.9GBsec regardless
> of number of drives, so it max's at around 8 / 9 drives (with 15K).
> RAID6 around 900MBsec i recall.  We expect more with emerging
> expanders.
>
> We were hoping to use DM MPIO to increase performance using multiple
> cards and paths, but MPIO at best matches performance of a single
> card, most likely pulls it down.... but this is a different topic i
> guess.
>
> XFS in the past has often increased performance - not allows on simple
> sequential writes, but FS's are a lot better at intelligently caching
> data... so I am very keen to help in whatever way i can to resolve the
> FS > MD performance problem.
>
> Thanks again.... Mark
>
>
> On Tue, Oct 13, 2009 at 3:30 PM, Asdo <asdo@shiftmail.org> wrote:
>>
>>> On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
>>> <markdelfman@googlemail.com> wrote:
>>>
>>>>
>>>> We upgrading mainly because of support for the emerging LSI SAS2 cards
>>>> (which we are beta testing now)
>>>>
>>
>> What is this LSI SAS2 card you have with 10+ ports? The only 10+ ports LSI
>> card I see is the 84016E and it is a SAS1.
>>
>> You say the driver for such card is included in the vanilla kernel at
>> 2.6.30? That would be very nice... I grepped the 2.6.31 kernel source for
>> LSI cards but I can't find device strings such as 84016E ...
>>
>> Thank you
>>
>

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-12 16:58 xfs > md 50% write performance drop on .30+ kernel? mark delfman
                   ` (2 preceding siblings ...)
  2009-10-13  3:38 ` Richard Scobie
@ 2009-10-13 18:49 ` Greg Freemyer
  3 siblings, 0 replies; 23+ messages in thread
From: Greg Freemyer @ 2009-10-13 18:49 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

On Mon, Oct 12, 2009 at 12:58 PM, mark delfman
<markdelfman@googlemail.com> wrote:
> Hi... in recent tests we are seeing a 50% drop in performance from
> XFS>MD on a 2.6.30 kernel (compared to a 2.6.28 kernel)
>
> In short:  Performance to MD0 direct = circa 1.7GBsec (see below), via
> xfs circa 850MBsec.  On previous system (2.6.28) there was no drop in
> performance (in fact often an increase).
>
> I am hopefully that this is simply a matter of barriers etc on the
> newer kernel and MD, but we have tried many options and nothing seems
> to change this so would very much appreciate advice.
>
>
> Below is the configuration / test results
>
> Hardware:  Decent performance quad core with LSI SAS controller:  10 x
> 15K SAS drives
> (note we have tried this on various hardware and various amounts of drives).
>
> Newer kernel setup  (performance drop)
> Kernel 2.6.30.8  (open SUSE userspace)
> mdadm - v3.0 - 2nd June 2009
> Library version:   1.02.31 (2009-03-03)
> Driver version:    4.14.0
>
> RAID0 created: mdadm -C /dev/md0 -l0 -n10 /dev/sd[b-k]
> RAID0 Performance:
> dd if=/dev/zero of=/dev/md0 bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 12.6685 s, 1.7 GB/s
>
>
> XFS Created:  (can see from output it is self aligning - but tried
> various alignments)
>
> # mkfs.xfs -f /dev/md0
> meta-data=/dev/md0               isize=256    agcount=32, agsize=22888176 blks
>         =                                           sectsz=512   attr=2
> data     =                       bsize=4096   blocks=732421600, imaxpct=5
>         =                       sunit=16     swidth=160 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=32768, version=2
>         =                       sectsz=512   sunit=16 blks, lazy-count=0
> realtime =none                   extsz=655360 blocks=0, rtextents=0
>
>
> Mounted:  mount -o nobarrier /dev/md0 /mnt/md0
> /dev/md0 on /mnt/md0 type xfs (rw,nobarrier)
> (tried with barriers / async)
>
> Performance:
>
> linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 23.631 s, 887 MB/s
>
>
>
> Note:
>
> Older kernel setup (no performance drop)
> Newer kernel setup
> Kernel 2.6.28.4
> mdadm  2.6.8
> Library version:   1.02.27 (2008-06-25)
> Driver version:    4.14.0

It doesn't look like you are using device mapper, but I just saw this posted:

========
We used to issue EOPNOTSUPP in response to barriers (so flushing ceased to be
supported when it became barrier-based). 'Basic' barrier support was added
first (2.6.30-rc2), as Mike says, by waiting for relevant I/O to complete.
Then this was extended (2.6.31-rc1) to send barriers to the underlying devices
for most types of dm targets.

To see which dm targets in a particular source tree forward barriers
(i.e. set ti->num_flush_requests to a non-zero value), run:
 grep 'ti->num_flush_requests =' drivers/md/dm*c
=========

So barriers went through an implementation change in 2.6.30.  Thought
it might give you one more thing to chase down.

Greg

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 10:21   ` Asdo
  2009-10-13 10:34     ` Mikael Abrahamsson
@ 2009-10-13 19:53     ` Richard Scobie
  2009-10-13 21:52       ` mark delfman
  1 sibling, 1 reply; 23+ messages in thread
From: Richard Scobie @ 2009-10-13 19:53 UTC (permalink / raw)
  To: Asdo; +Cc: linux-raid

Asdo wrote:

> Do you think it was controller's overhead? I have heard mixed opinions 
> about 3wares. What are the fastest controllers around for MD-raid use?

The fastest setup I have found have been LSI SAS cards - LSISAS3442E-R, 
with the onboard RAID firmware replaced with the IT firmware.

This is connected to port expander based JBOD chassis loaded with either 
  SAS or SATA drives.

Regards,

Richard


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 19:53     ` Richard Scobie
@ 2009-10-13 21:52       ` mark delfman
  0 siblings, 0 replies; 23+ messages in thread
From: mark delfman @ 2009-10-13 21:52 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Asdo, linux-raid

This is the same experience we have; without a doubt the ‘dumb’ LSI
SAS cards + MD are certainly faster and more flexible (actually, I
should say MD is faster and more flexible).

The LSI SAS2 chips scale up better via the expanders (even if the
expanders / drives are SAS1).

BUT - the problem with all this performance is that it is disappointing
when we lose it on XFS :(




On Tue, Oct 13, 2009 at 8:53 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Asdo wrote:
>
>> Do you think it was controller's overhead? I have heard mixed opinions
>> about 3wares. What are the fastest controllers around for MD-raid use?
>
> The fastest setup I have found have been LSI SAS cards - LSISAS3442E-R, with
> the onboard RAID firmware replaced with the IT firmware.
>
> This is connected to port expander based JBOD chassis loaded with either
>  SAS or SATA drives.
>
> Regards,
>
> Richard

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 11:06   ` mark delfman
  2009-10-13 11:09     ` Majed B.
@ 2009-10-13 22:52     ` Christoph Hellwig
  2009-10-14 19:34       ` mark delfman
  1 sibling, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2009-10-13 22:52 UTC (permalink / raw)
  To: mark delfman; +Cc: Linux RAID Mailing List

On Tue, Oct 13, 2009 at 12:06:24PM +0100, mark delfman wrote:
> A little more information which I 'think' seems to point at MD.....
> 
> Creating an EXT3 FS on an MD RAID also shows a circa 50% performance drop.
> We have tried a multitude of RAID options (raid6/0 various chunks etc).
> 
> Using a hardware based raid XFS / EXT3 shows no performance drop
> (although the hardware raid is significantly slower than MD in the
> first place)
> 
> We are happy to keep testing and offering anything that could be
> useful, we are just a little stuck thinking of anything else to do....

Can you test with conv=direct added to the dd command lines?  If that
shows the problems too, it's probably writeback-related.  If not, the
problems must be somewhere lower in the stack.


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-13 22:52     ` Christoph Hellwig
@ 2009-10-14 19:34       ` mark delfman
  2009-10-27 10:28         ` Thomas Fjellstrom
  0 siblings, 1 reply; 23+ messages in thread
From: mark delfman @ 2009-10-14 19:34 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux RAID Mailing List

Hi Chris... we tried the direct dd as requested and the problem is
still there...
1.3GB/sec > 325MB/sec  (even more dramatic)... hopefully this helps
narrow it down?


Write > MD
linux-poly:~ # dd if=/dev/zero of=/dev/md0 oflag=direct bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 15.7671 s, 1.3 GB/s


Write > XFS > MD
linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test oflag=direct bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 64.616 s, 325 MB/s



On Tue, Oct 13, 2009 at 11:52 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Tue, Oct 13, 2009 at 12:06:24PM +0100, mark delfman wrote:
>> A little more information which I 'think' seems to point at MD.....
>>
>> Creating an EXT3 FS on an MD RAID also shows a circa 50% performance drop.
>> We have tried a multitude of RAID options (raid6/0 various chunks etc).
>>
>> Using a hardware based raid XFS / EXT3 shows no performance drop
>> (although the hardware raid is significantly slower than MD in the
>> first place)
>>
>> We are happy to keep testing and offering anything that could be
>> useful, we are just a little stuck thinking of anything else to do....
>
> Can you test with conv=direct added to the dd command lines?  If that
> shows the problems too it's probably writeback-related.  If not the
> problems must be somewhere lower in the stack.
>

* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-14 19:34       ` mark delfman
@ 2009-10-27 10:28         ` Thomas Fjellstrom
  2009-10-27 11:11           ` Thomas Fjellstrom
  0 siblings, 1 reply; 23+ messages in thread
From: Thomas Fjellstrom @ 2009-10-27 10:28 UTC (permalink / raw)
  To: mark delfman; +Cc: Christoph Hellwig, Linux RAID Mailing List

On Wed October 14 2009, mark delfman wrote:
> Hi Chris... we tried the direct DD as requested and the problem is
> still there...
> 1.3GBsec > 325MBsec  (even more dromatic)... hopefully this helps
> narrow it down?
> 
> 
> Write > MD
> linux-poly:~ # dd if=/dev/zero of=/dev/md0 oflag=direct bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 15.7671 s, 1.3 GB/s
> 
> 
> Write > XFS > MD
> linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test oflag=direct bs=1M
>  count=20000 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 64.616 s, 325 MB/s

If it helps, I'm seeing the same sort of thing.
The most I can seemingly tweak out of my new 5x1TB array is 170MB/s write.
Using dd with oflag=direct drops it down to 31MB/s.

Oddly, I see spikes of over 200MB/s write when not using oflag=direct,
but it slows down in between to 11MB/s, so overall
it averages a max of 170MB/s. The device itself is capable of over 500MB/s.
(a 66% drop?)
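
(I'm watching this with something along these lines while the dd runs -
a sketch, iostat being from the sysstat package:)

# one-second samples, throughput reported in MB/s; watch the md0 row
iostat -m 1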

small test:

$ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096 oflag=direct
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 71.8088 s, 29.9 MB/s

$ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 19.7101 s, 109 MB/s

$ sudo dd if=/dev/md0 of=/tmp/test-data.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.39796 s, 448 MB/s

$ sudo dd if=/tmp/test-data.img of=/dev/md0 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.05666 s, 522 MB/s

$ cd /mnt/test-data/test
$ iozone -A -s4G -y512k -q512k
       ...                                                               
              KB  reclen   write rewrite    read    reread    
         4194304     512  161732  333316   382361   388726 


[snip]
> 


info, if it helps:

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.01
  Creation Time : Wed Oct 14 08:55:25 2009
     Raid Level : raid5
     Array Size : 3907049472 (3726.05 GiB 4000.82 GB)
  Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Oct 27 04:18:50 2009
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : natasha:0  (local to host natasha)
           UUID : 7d0e9847:ec3a4a46:32b60a80:06d0ee1c
         Events : 4952

    Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       80        1      active sync   /dev/sdf
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       5       8       96        4      active sync   /dev/sdg

# xfs_info /dev/md0
meta-data=/dev/md0               isize=256    agcount=32, agsize=30523776 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=976760832, imaxpct=5
         =                       sunit=128    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=476934, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=2097152 blocks=0, rtextents=0

-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-27 10:28         ` Thomas Fjellstrom
@ 2009-10-27 11:11           ` Thomas Fjellstrom
  2010-01-02  6:54             ` fibre raid
  0 siblings, 1 reply; 23+ messages in thread
From: Thomas Fjellstrom @ 2009-10-27 11:11 UTC (permalink / raw)
  To: mark delfman; +Cc: Christoph Hellwig, Linux RAID Mailing List

On Tue October 27 2009, Thomas Fjellstrom wrote:
> On Wed October 14 2009, mark delfman wrote:
> > Hi Chris... we tried the direct DD as requested and the problem is
> > still there...
> > 1.3GBsec > 325MBsec  (even more dromatic)... hopefully this helps
> > narrow it down?
> >
> >
> > Write > MD
> > linux-poly:~ # dd if=/dev/zero of=/dev/md0 oflag=direct bs=1M
> > count=20000 20000+0 records in
> > 20000+0 records out
> > 20971520000 bytes (21 GB) copied, 15.7671 s, 1.3 GB/s
> >
> >
> > Write > XFS > MD
> > linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test oflag=direct bs=1M
> >  count=20000 20000+0 records in
> > 20000+0 records out
> > 20971520000 bytes (21 GB) copied, 64.616 s, 325 MB/s
> 
> If it helps, I'm seeing the same sort of thing.
> The most I can seemingly tweak out of my new 5x1TB array is 170MB/s
>  write. Using dd with oflags=direct drops it down to 31MB/s.
> 
> Oddly, I see spikes of over 200MB/s write when not using oflags=direct,
> but it slows down in between to 11MB/s so over all,
> it averages a max of 170MB/s. the device itself is capable of over
>  500MB/s. (66% drop?)
> 
> small test:
> 
> $ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096
>  oflag=direct 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 71.8088 s, 29.9 MB/s
> 
> $ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096
> 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 19.7101 s, 109 MB/s
> 
> $ sudo dd if=/dev/md0 of=/tmp/test-data.img bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.39796 s, 448 MB/s
> 
> $ sudo dd if=/tmp/test-data.img of=/dev/md0 bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.05666 s, 522 MB/s
> 
> $ cd /mnt/test-data/test
> $ iozone -A -s4G -y512k -q512k
>        ...
>               KB  reclen   write rewrite    read    reread
>          4194304     512  161732  333316   382361   388726
> 
> 
> [snip]
> 
> 
> 
> info, if it helps:
> 
> # mdadm -D /dev/md0
> /dev/md0:
>         Version : 1.01
>   Creation Time : Wed Oct 14 08:55:25 2009
>      Raid Level : raid5
>      Array Size : 3907049472 (3726.05 GiB 4000.82 GB)
>   Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
>    Raid Devices : 5
>   Total Devices : 5
>     Persistence : Superblock is persistent
> 
>     Update Time : Tue Oct 27 04:18:50 2009
>           State : clean
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>            Name : natasha:0  (local to host natasha)
>            UUID : 7d0e9847:ec3a4a46:32b60a80:06d0ee1c
>          Events : 4952
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       64        0      active sync   /dev/sde
>        1       8       80        1      active sync   /dev/sdf
>        2       8       32        2      active sync   /dev/sdc
>        3       8       48        3      active sync   /dev/sdd
>        5       8       96        4      active sync   /dev/sdg
> 
> # xfs_info /dev/md0
> meta-data=/dev/md0               isize=256    agcount=32, agsize=30523776
>  blks =                       sectsz=4096  attr=2
> data     =                       bsize=4096   blocks=976760832, imaxpct=5
>          =                       sunit=128    swidth=512 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=476934, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=0
> realtime =none                   extsz=2097152 blocks=0, rtextents=0
> 

I ran 4 dd's in parallel, all writing to a different file on the array:

4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 33.4193 s, 64.3 MB/s
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 35.5599 s, 60.4 MB/s
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 36.4677 s, 58.9 MB/s
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 37.912 s, 56.6 MB/s

iostat showed spikes of up to 300MB/s and it usually hovered over 200MB/s.
I tried bumping it to 8 at a time, but it seems to max out at just over
200MB/s.  I was hoping that with enough jobs, it might scale up to the
device's actual max throughput.
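
(The parallel run was driven roughly like this - a sketch, with the
file names just being examples:)

# four writers, each to its own file on the array's xfs mount
for i in 1 2 3 4; do
    dd if=/dev/zero of=/mnt/test-data/par$i bs=512KiB count=4096 &
done
wait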

-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca


* Re: xfs > md 50% write performance drop on .30+ kernel?
  2009-10-27 11:11           ` Thomas Fjellstrom
@ 2010-01-02  6:54             ` fibre raid
  0 siblings, 0 replies; 23+ messages in thread
From: fibre raid @ 2010-01-02  6:54 UTC (permalink / raw)
  To: tfjellstrom, mark delfman, Christoph Hellwig, Linux RAID Mailing List

Hi Mark,

I'm catching up on my thread-reading and saw your performance report
concerning MD on 2.6.30 (running RAID 0 at 1.7GB/s) versus layering
XFS on top, which reduces performance by about 50%. Having read (what I
think is) the full thread, it does not seem there was any conclusion
on this. Did you reach a conclusion about the cause, etc.? I am curious
to see what the issue might be, as I'm seeing it on my end as well.

Best regards,
-T

On Tue, Oct 27, 2009 at 3:11 AM, Thomas Fjellstrom <tfjellstrom@shaw.ca> wrote:
> On Tue October 27 2009, Thomas Fjellstrom wrote:
>> On Wed October 14 2009, mark delfman wrote:
>> > Hi Chris... we tried the direct DD as requested and the problem is
>> > still there...
>> > 1.3GBsec > 325MBsec  (even more dromatic)... hopefully this helps
>> > narrow it down?
>> >
>> >
>> > Write > MD
>> > linux-poly:~ # dd if=/dev/zero of=/dev/md0 oflag=direct bs=1M
>> > count=20000 20000+0 records in
>> > 20000+0 records out
>> > 20971520000 bytes (21 GB) copied, 15.7671 s, 1.3 GB/s
>> >
>> >
>> > Write > XFS > MD
>> > linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test oflag=direct bs=1M
>> >  count=20000 20000+0 records in
>> > 20000+0 records out
>> > 20971520000 bytes (21 GB) copied, 64.616 s, 325 MB/s
>>
>> If it helps, I'm seeing the same sort of thing.
>> The most I can seemingly tweak out of my new 5x1TB array is 170MB/s
>>  write. Using dd with oflags=direct drops it down to 31MB/s.
>>
>> Oddly, I see spikes of over 200MB/s write when not using oflags=direct,
>> but it slows down in between to 11MB/s so over all,
>> it averages a max of 170MB/s. the device itself is capable of over
>>  500MB/s. (66% drop?)
>>
>> small test:
>>
>> $ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096
>>  oflag=direct 4096+0 records in
>> 4096+0 records out
>> 2147483648 bytes (2.1 GB) copied, 71.8088 s, 29.9 MB/s
>>
>> $ dd if=/dev/zero of=/mnt/test-data/test.file bs=512KiB count=4096
>> 4096+0 records in
>> 4096+0 records out
>> 2147483648 bytes (2.1 GB) copied, 19.7101 s, 109 MB/s
>>
>> $ sudo dd if=/dev/md0 of=/tmp/test-data.img bs=1M count=1024
>> 1024+0 records in
>> 1024+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.39796 s, 448 MB/s
>>
>> $ sudo dd if=/tmp/test-data.img of=/dev/md0 bs=1M count=1024
>> 1024+0 records in
>> 1024+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.05666 s, 522 MB/s
>>
>> $ cd /mnt/test-data/test
>> $ iozone -A -s4G -y512k -q512k
>>        ...
>>               KB  reclen   write rewrite    read    reread
>>          4194304     512  161732  333316   382361   388726
>>
>>
>> [snip]
>>
>>
>>
>> info, if it helps:
>>
>> # mdadm -D /dev/md0
>> /dev/md0:
>>         Version : 1.01
>>   Creation Time : Wed Oct 14 08:55:25 2009
>>      Raid Level : raid5
>>      Array Size : 3907049472 (3726.05 GiB 4000.82 GB)
>>   Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
>>    Raid Devices : 5
>>   Total Devices : 5
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Tue Oct 27 04:18:50 2009
>>           State : clean
>>  Active Devices : 5
>> Working Devices : 5
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>            Name : natasha:0  (local to host natasha)
>>            UUID : 7d0e9847:ec3a4a46:32b60a80:06d0ee1c
>>          Events : 4952
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8       64        0      active sync   /dev/sde
>>        1       8       80        1      active sync   /dev/sdf
>>        2       8       32        2      active sync   /dev/sdc
>>        3       8       48        3      active sync   /dev/sdd
>>        5       8       96        4      active sync   /dev/sdg
>>
>> # xfs_info /dev/md0
>> meta-data=/dev/md0               isize=256    agcount=32, agsize=30523776
>>  blks =                       sectsz=4096  attr=2
>> data     =                       bsize=4096   blocks=976760832, imaxpct=5
>>          =                       sunit=128    swidth=512 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=476934, version=2
>>          =                       sectsz=4096  sunit=1 blks, lazy-count=0
>> realtime =none                   extsz=2097152 blocks=0, rtextents=0
>>
>
> ran 4 dd's in parallel all writing to a different file on the array:
>
> 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 33.4193 s, 64.3 MB/s
> 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 35.5599 s, 60.4 MB/s
> 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 36.4677 s, 58.9 MB/s
> 4096+0 records in
> 4096+0 records out
> 2147483648 bytes (2.1 GB) copied, 37.912 s, 56.6 MB/s
>
> iostat showed spikes of up to 300MB/s and it usually hovered over 200MB/s.
> I tried bumping it to 8 at a time, but it seems to max out at just over
> 200MB/s.  was hoping that with enough jobs, it might scale up to the devices
> actual max throughput.
>
> --
> Thomas Fjellstrom
> tfjellstrom@shaw.ca
