linux-kernel.vger.kernel.org archive mirror
* xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown
@ 2010-06-08  9:55 Michael Tokarev
  2010-06-08 12:29 ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Tokarev @ 2010-06-08  9:55 UTC (permalink / raw)
  To: Linux-kernel

Hello.

I've got a... difficult issue here, and am asking if anyone else
has some experience or information about it.

Production environment (database).  Machine with an Adaptec
RAID SCSI controller, 6 drives in raid10 array, XFS filesystem
and Oracle database on top of it (with - hopefully - proper
sunit/swidth).

After upgrading the kernel from 2.6.27 to 2.6.32, users start screaming
about very bad performance.  Iostat reports increased I/O latencies:
I/O time increases from ~5ms to ~30ms.  Switching back to 2.6.27,
everything is back to normal (or, rather, usual).

I tried testing I/O with a sample program which performs direct random
I/O on a given device, and all speeds are actually better in .32
compared with .27, except for the random concurrent r+w test, where .27
gives reads a bit more of a chance than .32.  Looking at the synthetic
tests I'd expect .32 to be faster, but apparently it is not.

This is the only machine here which is still running 2.6.27; all the
rest are upgraded to 2.6.32, and I see good performance of .32 there.
But this is also the only machine with a hardware raid controller, which
is onboard and hence not easy to get rid of, so I'm sorta forced to
use it (I prefer a software raid solution for numerous reasons).

One possible cause of this that comes to mind is block device write
barriers, but I can't find out whether they're actually in effect here.

The most problematic issue here is that this is the only machine that
behaves like this, and it is a production server, so I have very little
chance to experiment with it.

So before the next try, I'd love to have some suggestions about what
to look for.   In particular, I think it's worth the effort to look
at write barriers, but again, I don't know how to check if they're
actually being used.

Does anyone have suggestions about what to collect and look at?

Thank you!

/mjt


* Re: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown
  2010-06-08  9:55 xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown Michael Tokarev
@ 2010-06-08 12:29 ` Dave Chinner
  2010-06-08 20:34   ` xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown] Michael Tokarev
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2010-06-08 12:29 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Linux-kernel, xfs


[ cc'd XFS list ]

On Tue, Jun 08, 2010 at 01:55:51PM +0400, Michael Tokarev wrote:
> Hello.
> 
> I've got a... difficult issue here, and am asking if anyone else
> has some experience or information about it.
> 
> Production environment (database).  Machine with an Adaptec
> RAID SCSI controller, 6 drives in raid10 array, XFS filesystem
> and Oracle database on top of it (with - hopefully - proper
> sunit/swidth).
> 
> After upgrading the kernel from 2.6.27 to 2.6.32, users start screaming
> about very bad performance.  Iostat reports increased I/O latencies:
> I/O time increases from ~5ms to ~30ms.  Switching back to 2.6.27,
> everything is back to normal (or, rather, usual).
> 
> I tried testing I/O with a sample program which performs direct random
> I/O on a given device, and all speeds are actually better in .32
> compared with .27, except for the random concurrent r+w test, where .27
> gives reads a bit more of a chance than .32.  Looking at the synthetic
> tests I'd expect .32 to be faster, but apparently it is not.
> 
> This is the only machine here which is still running 2.6.27; all the
> rest are upgraded to 2.6.32, and I see good performance of .32 there.
> But this is also the only machine with a hardware raid controller, which
> is onboard and hence not easy to get rid of, so I'm sorta forced to
> use it (I prefer a software raid solution for numerous reasons).
> 
> One possible cause of this that comes to mind is block device write
> barriers, but I can't find out whether they're actually in effect here.
> 
> The most problematic issue here is that this is the only machine that
> behaves like this, and it is a production server, so I have very little
> chance to experiment with it.
> 
> So before the next try, I'd love to have some suggestions about what
> to look for.   In particular, I think it's worth the effort to look
> at write barriers, but again, I don't know how to check if they're
> actually being used.
> 
> Does anyone have suggestions about what to collect and look at?

http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-08 12:29 ` Dave Chinner
@ 2010-06-08 20:34   ` Michael Tokarev
  2010-06-08 23:18     ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Tokarev @ 2010-06-08 20:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux-kernel, xfs

08.06.2010 16:29, Dave Chinner wrote:
> On Tue, Jun 08, 2010 at 01:55:51PM +0400, Michael Tokarev wrote:
>> Hello.
>>
>> I've got a... difficult issue here, and am asking if anyone else
>> has some experience or information about it.
>>
>> Production environment (database).  Machine with an Adaptec
>> RAID SCSI controller, 6 drives in raid10 array, XFS filesystem
>> and Oracle database on top of it (with - hopefully - proper
>> sunit/swidth).
>>
>> After upgrading the kernel from 2.6.27 to 2.6.32, users start screaming
>> about very bad performance.  Iostat reports increased I/O latencies:
>> I/O time increases from ~5ms to ~30ms.  Switching back to 2.6.27,
>> everything is back to normal (or, rather, usual).
>>
>> I tried testing I/O with a sample program which performs direct random
>> I/O on a given device, and all speeds are actually better in .32
>> compared with .27, except for the random concurrent r+w test, where .27
>> gives reads a bit more of a chance than .32.  Looking at the synthetic
>> tests I'd expect .32 to be faster, but apparently it is not.
>>
>> This is the only machine here which is still running 2.6.27; all the
>> rest are upgraded to 2.6.32, and I see good performance of .32 there.
>> But this is also the only machine with a hardware raid controller, which
>> is onboard and hence not easy to get rid of, so I'm sorta forced to
>> use it (I prefer a software raid solution for numerous reasons).
>>
>> One possible cause of this that comes to mind is block device write
>> barriers, but I can't find out whether they're actually in effect here.
>>
>> The most problematic issue here is that this is the only machine that
>> behaves like this, and it is a production server, so I have very little
>> chance to experiment with it.
>>
>> So before the next try, I'd love to have some suggestions about what
>> to look for.   In particular, I think it's worth the effort to look
>> at write barriers, but again, I don't know how to check if they're
>> actually being used.
>>
>> Does anyone have suggestions about what to collect and look at?
>
> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

Yes, I've seen this.  We have been using xfs for quite a long time.
The on-board controller does not have a battery unit, so it should be
no different from a software raid array or a single drive.

But I traced the issue to a particular workload -- see $subject.

Simple test doing random reads or writes of 4k blocks in a 1Gb
file located on an xfs filesystem, Mb/sec:

                      sync  direct
              read   write   write
2.6.27 xfs   1.17    3.69    3.80
2.6.32 xfs   1.26    0.52    5.10
                      ^^^^
2.6.32 ext3  1.19    4.91    5.02

Note the 10 times difference between O_SYNC and O_DIRECT writes
in 2.6.32.  This is, well, a huge difference, and this is where
the original slowdown comes from, apparently.  In 2.6.27 both
sync and direct writes are on par with each other; in .32
direct write has improved, but sync write is just pathetic now.
And compared with the previous O_SYNC numbers, that's about the
6-times difference which I reported previously.
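
For reference, the write side of the test is roughly of this shape --
a simplified sketch, not the actual code (the iteration count and
error handling are arbitrary, and timing is omitted):

/*
 * Random 4k writes into an existing 1Gb file, opened with either
 * O_SYNC or O_DIRECT depending on argv[2] ("sync" or "direct").
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ  4096
#define FILESZ (1024LL * 1024 * 1024)

int main(int argc, char **argv)
{
        int flags, fd, i;
        void *buf;

        if (argc < 3) {
                fprintf(stderr, "usage: %s <file> sync|direct\n", argv[0]);
                return 1;
        }
        flags = O_WRONLY | (strcmp(argv[2], "direct") ? O_SYNC : O_DIRECT);
        fd = open(argv[1], flags);
        if (fd < 0 || posix_memalign(&buf, BLKSZ, BLKSZ)) {
                perror("setup");
                return 1;
        }
        memset(buf, 0x5a, BLKSZ);
        srand(12345);
        for (i = 0; i < 10000; i++) {
                /* random 4k-aligned offset within the 1Gb file */
                off_t off = (off_t)(rand() % (FILESZ / BLKSZ)) * BLKSZ;
                if (pwrite(fd, buf, BLKSZ, off) != BLKSZ) {
                        perror("pwrite");
                        return 1;
                }
        }
        close(fd);
        return 0;
}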

We're running a legacy oracle application here, on Oracle8,
which does not support O_DIRECT and uses O_SYNC.  So it gets
hit by this issue quite badly - no doubt users start screaming
after switching to .32.

I also tested ext3fs, for comparison.  This one does not have
that problem and works just fine in both .32 and .27.  I also
tried disabling barriers for xfs, which made no difference
whatsoever.

So it's O_SYNC writes on XFS which are problematic.  Together
with hw raid apparently, since no one noticed when I switched
other machines (with sw raid) from .27 to .32.

I'll _try_ to find when the problem first appeared, but it is
not that simple since I have only a very small time window for
testing.

Thanks!

/mjt


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-08 20:34   ` xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown] Michael Tokarev
@ 2010-06-08 23:18     ` Dave Chinner
  2010-06-09  6:43       ` Michael Tokarev
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2010-06-08 23:18 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Linux-kernel, xfs

On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
> 08.06.2010 16:29, Dave Chinner wrote:
> >On Tue, Jun 08, 2010 at 01:55:51PM +0400, Michael Tokarev wrote:
> >>Hello.
> >>
> >>I've got a... difficult issue here, and am asking if anyone else
> >>has some experience or information about it.
> >>
> >>Production environment (database).  Machine with an Adaptec
> >>RAID SCSI controller, 6 drives in raid10 array, XFS filesystem
> >>and Oracle database on top of it (with - hopefully - proper
> >>sunit/swidth).
> >>
> >>After upgrading the kernel from 2.6.27 to 2.6.32, users start screaming
> >>about very bad performance.  Iostat reports increased I/O latencies:
> >>I/O time increases from ~5ms to ~30ms.  Switching back to 2.6.27,
> >>everything is back to normal (or, rather, usual).
....
> >>The most problematic issue here is that this is the only machine that
> >>behaves like this, and it is a production server, so I have very little
> >>chance to experiment with it.
> >>
> >>So before the next try, I'd love to have some suggestions about what
> >>to look for.   In particular, I think it's worth the effort to look
> >>at write barriers, but again, I don't know how to check if they're
> >>actually being used.
> >>
> >>Does anyone have suggestions about what to collect and look at?
> >
> >http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
> 
> Yes, I've seen this.  We have been using xfs for quite a long time.
> The on-board controller does not have a battery unit, so it should be
> no different from a software raid array or a single drive.
> 
> But I traced the issue to a particular workload -- see $subject.
> 
> Simple test doing random reads or writes of 4k blocks in a 1Gb
> file located on an xfs filesystem, Mb/sec:
> 
>                      sync  direct
>              read   write   write
> 2.6.27 xfs   1.17    3.69    3.80
> 2.6.32 xfs   1.26    0.52    5.10
>                      ^^^^
> 2.6.32 ext3  1.19    4.91    5.02
> 
> Note the 10 times difference between O_SYNC and O_DIRECT writes
> in 2.6.32.  This is, well, a huge difference, and this is where
> the original slowdown comes from, apparently. 

Are you running on the raw block device, or on top of LVM/DM/MD to
split up the space on the RAID drive? DM+MD have grown barrier
support since 2.6.27, so it may be that barriers are now being
passed down to the raid hardware on 2.6.32 and they never were on
2.6.27. Can you paste the output of dmesg when the XFS filesystem in
question is mounted on both 2.6.27 and 2.6.32 so we can see if
there is a difference in the use of barriers?

Also, remember that O_DIRECT does not imply O_SYNC. O_DIRECT writes
only write data, while O_SYNC will also write metadata and/or the
log.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-08 23:18     ` Dave Chinner
@ 2010-06-09  6:43       ` Michael Tokarev
  2010-06-09  7:47         ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Tokarev @ 2010-06-09  6:43 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux-kernel, xfs

09.06.2010 03:18, Dave Chinner wrote:
> On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
[]
>> Simple test doing random reads or writes of 4k blocks in a 1Gb
>> file located on an xfs filesystem, Mb/sec:
>>
>>                       sync  direct
>>               read   write   write
>> 2.6.27 xfs   1.17    3.69    3.80
>> 2.6.32 xfs   1.26    0.52    5.10
>>                      ^^^^
>> 2.6.32 ext3  1.19    4.91    5.02
>>
>> Note the 10 times difference between O_SYNC and O_DIRECT writes
>> in 2.6.32.  This is, well, a huge difference, and this is where
>> the original slowdown comes from, apparently.
>
> Are you running on the raw block device, or on top of LVM/DM/MD to
> split up the space on the RAID drive? DM+MD have grown barrier
> support since 2.6.27, so it may be that barriers are now being
> passed down to the raid hardware on 2.6.32 and they never were on
> 2.6.27. Can you paste the output of dmesg when the XFS filesystem in

That's why I asked how to tell if barriers are actually hitting the
device in question.

No, this is the only machine where DM/MD is _not_ used.  On all other
machines we use MD software raid; this machine comes with an onboard
raid controller that does not work in JBOD mode, so I wasn't able to
use linux software raid.  This is XFS on top of an Adaptec RAID card,
nothing in-between.

Also, as I mentioned in the previous email, remounting with nobarrier
makes no difference whatsoever.

(Another side note: I discovered that unknown options are
silently ignored in "remount" mode while correctly rejected in
"plain mount" mode -- it looks like a kernel bug actually, but
that's an entirely different issue.)

> question is mounted on both 2.6.27 and 2.6.32 so we can see if
> there is a difference in the use of barriers?
>
> Also, remember that O_DIRECT does not imply O_SYNC. O_DIRECT writes
> only write data, while O_SYNC will also write metadata and/or the
> log.

I know this.  I also found the osyncisosync and osyncisdsync mount
options, and when I try to use the latter, the kernel tells me it's the
default and hence deprecated.  I don't need metadata updates, but
it _looks_ like the system is doing such updates (with barriers
or flushes?) anyway: even when mounted with -o osyncisdsync it behaves
the same, very slow.

I also experimented with both O_SYNC|O_DIRECT: it is as slow as
without O_DIRECT, i.e. O_SYNC makes the whole thing slow regardless
of other options.

I looked at the dmesg outputs, and there are no relevant differences
related to block devices or usage of barriers.  For XFS it always
mounts like this:

  SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
  SGI XFS Quota Management subsystem
  XFS mounting filesystem sda6

and for the device in question, it is always like

  Adaptec aacraid driver 1.1-5[2456]-ms
  aacraid 0000:03:01.0: PCI INT A -> GSI 24 (level, low) -> IRQ 24
  AAC0: kernel 5.1-0[8832] Feb  1 2006
  AAC0: monitor 5.1-0[8832]
  AAC0: bios 5.1-0[8832]
  AAC0: serial 267BE0
  AAC0: Non-DASD support enabled.
  AAC0: 64bit support enabled.
  AAC0: 64 Bit DAC enabled
  scsi0 : aacraid
  scsi 0:0:0:0: Direct-Access     Adaptec  f0500            V1.0 PQ: 0 ANSI: 2
  sd 0:0:0:0: [sda] 286715904 512-byte hardware sectors (146799 MB)
  sd 0:0:0:0: [sda] Write Protect is off
  sd 0:0:0:0: [sda] Mode Sense: 06 00 10 00
  sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
   sda: sda1 sda2 sda3 < sda5 sda6 >

There are tons of other differences, but that is to be expected (like
the format of the CPU topology printing, which changed between .27 and .32).

Thanks!

/mjt


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-09  6:43       ` Michael Tokarev
@ 2010-06-09  7:47         ` Dave Chinner
  2010-06-09 19:11           ` Michael Tokarev
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2010-06-09  7:47 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Linux-kernel, xfs

On Wed, Jun 09, 2010 at 10:43:37AM +0400, Michael Tokarev wrote:
> 09.06.2010 03:18, Dave Chinner wrote:
> >On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
> []
> >>Simple test doing random reads or writes of 4k blocks in a 1Gb
> >>file located on an xfs filesystem, Mb/sec:
> >>
> >>                      sync  direct
> >>              read   write   write
> >>2.6.27 xfs   1.17    3.69    3.80
> >>2.6.32 xfs   1.26    0.52    5.10
> >>                     ^^^^
> >>2.6.32 ext3  1.19    4.91    5.02

Out of curiosity, what does 2.6.34 get on this workload?

Also, what happens if you test with noop or deadline scheduler,
rather than cfq (or whichever one you are using)? i.e. is this a
scheduler regression rather than a filesystem issue?
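
(For reference, switching the elevator at run time is just a sysfs
write; a tiny sketch, assuming the device is sda:)

/* Hypothetical helper: select an I/O scheduler for a block device.
 * Equivalent to: echo deadline > /sys/block/sda/queue/scheduler
 * Reading the same file back shows the active scheduler in brackets. */
#include <stdio.h>

static int set_scheduler(const char *dev, const char *sched)
{
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%s\n", sched);
        return fclose(f);       /* 0 on success */
}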

Also, a block trace of the sync write workload on both .27 and .32
would be interesting to see what the difference in IO patterns is...

> >>Note the 10 times difference between O_SYNC and O_DIRECT writes
> >>in 2.6.32.  This is, well, a huge difference, and this is where
> >>the original slowdown comes from, apparently.
> >
> >Are you running on the raw block device, or on top of LVM/DM/MD to
> >split up the space on the RAID drive? DM+MD have grown barrier
> >support since 2.6.27, so it may be that barriers are now being
> >passed down to the raid hardware on 2.6.32 and they never were on
> >2.6.27. Can you paste the output of dmesg when the XFS filesystem in
> 
> That's why I asked how to tell if barriers are actually hitting the
> device in question.
> 
> No, this is the only machine where DM/MD is _not_ used.  On all other
> machines we use MD software raid; this machine comes with an onboard
> raid controller that does not work in JBOD mode, so I wasn't able to
> use linux software raid.  This is XFS on top of an Adaptec RAID card,
> nothing in-between.

Well, I normally just create a raid0 lun per disk in those cases,
hence the luns present the storage to linux as a JBOD....

> I also experimented with both O_SYNC|O_DIRECT: it is as slow as
> without O_DIRECT, i.e. O_SYNC makes the whole thing slow regardless
> of other options.

So it's the inode writeback that is causing the slowdown. We've
recently changed O_SYNC semantics to be real O_SYNC, not O_DSYNC
as it was in .27. I can't remember if that was in 2.6.32 or not, but
there's definitely a recent change to O_SYNC behaviour that would
cause this...
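
Roughly, in userspace terms -- an illustrative sketch, not what the
kernel actually does -- the difference is:

#include <unistd.h>

/* O_DSYNC-like: data plus only the metadata needed to read it back. */
static int write_dsync_like(int fd, const void *buf, size_t len)
{
        if (write(fd, buf, len) != (ssize_t)len)
                return -1;
        return fdatasync(fd);
}

/* "Real" O_SYNC: also forces out inode metadata (timestamps etc.),
 * which is what drags inode/log writeback into every small write. */
static int write_osync_like(int fd, const void *buf, size_t len)
{
        if (write(fd, buf, len) != (ssize_t)len)
                return -1;
        return fsync(fd);
}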

> related to block devices or usage of barriers.  For XFS it always
> mounts like this:
> 
>  SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
>  SGI XFS Quota Management subsystem
>  XFS mounting filesystem sda6

So barriers are being issued.

> and for the device in question, it is always like
> 
>  Adaptec aacraid driver 1.1-5[2456]-ms
>  aacraid 0000:03:01.0: PCI INT A -> GSI 24 (level, low) -> IRQ 24
>  AAC0: kernel 5.1-0[8832] Feb  1 2006

Old firmware. An update might help.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-09  7:47         ` Dave Chinner
@ 2010-06-09 19:11           ` Michael Tokarev
  2010-06-10  0:47             ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Tokarev @ 2010-06-09 19:11 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux-kernel, xfs

09.06.2010 11:47, Dave Chinner wrote:
> On Wed, Jun 09, 2010 at 10:43:37AM +0400, Michael Tokarev wrote:
>> 09.06.2010 03:18, Dave Chinner wrote:
>>> On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
>> []
>>>> Simple test doing random reads or writes of 4k blocks in a 1Gb
>>>> file located on an xfs filesystem, Mb/sec:
>>>>
>>>>                       sync  direct
>>>>               read   write   write
>>>> 2.6.27 xfs   1.17    3.69    3.80
>>>> 2.6.32 xfs   1.26    0.52    5.10
>>>>                      ^^^^
>>>> 2.6.32 ext3  1.19    4.91    5.02
>
> Out of curiosity, what does 2.6.34 get on this workload?

2.6.34 works quite well:
      2.6.34 xfs    1.14   4.75    5.00

The same holds with -o osyncisosync (in .34).  Actually, the
osyncis[od]sync mount options do not change anything, neither
in .32 nor in .34.

> Also, what happens if you test with noop or deadline scheduler,
> rather than cfq (or whichever one you are using)? i.e. is this a
> scheduler regression rather than a filesystem issue?

Using deadline.  Switching to noop makes no difference whatsoever.

> Also, a block trace of the sync write workload on both .27 and .32
> would be interesting to see what the difference in IO patterns is...

I see.  I will try to collect them, within the limited timeframe I
have to do any testing.

[]
> Well, I normally just create a raid0 lun per disk in those cases,
> hence the luns present the storage to linux as a JBOD....

That's, um, somewhat ugly :)

>> I also experimented with both O_SYNC|O_DIRECT: it is as slow as
>> without O_DIRECT, i.e. O_SYNC makes the whole thing slow regardless
>> of other options.
>
> So it's the inode writeback that is causing the slowdown. We've
> recently changed O_SYNC semantics to be real O_SYNC, not O_DSYNC
> as it was in .27. I can't remember if that was in 2.6.32 or not, but
> there's definitely a recent change to O_SYNC behaviour that would
> cause this...

But there are two mount options that seem to control this behaviour:
osyncisosync and osyncisdsync.  Neither of them - seemingly - makes
any difference.

>> related to block devices or usage of barriers.  For XFS it always
>> mounts like this:
>>
>>   SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
>>   SGI XFS Quota Management subsystem
>>   XFS mounting filesystem sda6
>
> So barriers are being issued.

They _are_ being issued, I knew that from the start.  What I asked
several times is whether there's a way to know if they're _hitting_ the
actual low-level device (disk or raid controller).  That is an entirely
different story... ;)

>> and for the device in question, it is always like
>>
>>   Adaptec aacraid driver 1.1-5[2456]-ms
>>   aacraid 0000:03:01.0: PCI INT A ->  GSI 24 (level, low) ->  IRQ 24
>>   AAC0: kernel 5.1-0[8832] Feb  1 2006
>
> Old firmware. An update might help.

Well, it worked just fine in .27.  So far this looks like a problem in
the kernel, not in the controller [firmware]... ;)

Thank you !

/mjt


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-09 19:11           ` Michael Tokarev
@ 2010-06-10  0:47             ` Dave Chinner
  2010-06-10  5:59               ` Michael Tokarev
  2010-06-10 14:58               ` Eric Sandeen
  0 siblings, 2 replies; 10+ messages in thread
From: Dave Chinner @ 2010-06-10  0:47 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Linux-kernel, xfs

On Wed, Jun 09, 2010 at 11:11:53PM +0400, Michael Tokarev wrote:
> 09.06.2010 11:47, Dave Chinner wrote:
> >On Wed, Jun 09, 2010 at 10:43:37AM +0400, Michael Tokarev wrote:
> >>09.06.2010 03:18, Dave Chinner wrote:
> >>>On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
> >>[]
> >>>>Simple test doing random reads or writes of 4k blocks in a 1Gb
> >>>>file located on an xfs filesystem, Mb/sec:
> >>>>
> >>>>                      sync  direct
> >>>>              read   write   write
> >>>>2.6.27 xfs   1.17    3.69    3.80
> >>>>2.6.32 xfs   1.26    0.52    5.10
> >>>>                     ^^^^
> >>>>2.6.32 ext3  1.19    4.91    5.02
> >
> >Out of curiosity, what does 2.6.34 get on this workload?
> 
> 2.6.34 works quite well:
>      2.6.34 xfs    1.14   4.75    5.00

Ok, so we are looking at a fixed regression, then. What stable
version of 2.6.32 are you testing? A large number of XFS fixes went
into 2.6.32.12 (IIRC, it might have been .13), so maybe the problem
is fixed there. Alternatively, can you use 2.6.34 rather than
2.6.32, or bisect the regression down to a specific set of fixes so
we can consider whether a backport is worth the effort?

> The same holds with -o osyncisosync (in .34).  Actually, the
> osyncis[od]sync mount options do not change anything, neither
> in .32 nor in .34.

I think only osyncisosync exists, and it doesn't do anything
anymore.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-10  0:47             ` Dave Chinner
@ 2010-06-10  5:59               ` Michael Tokarev
  2010-06-10 14:58               ` Eric Sandeen
  1 sibling, 0 replies; 10+ messages in thread
From: Michael Tokarev @ 2010-06-10  5:59 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux-kernel, xfs

10.06.2010 04:47, Dave Chinner wrote:
> On Wed, Jun 09, 2010 at 11:11:53PM +0400, Michael Tokarev wrote:
>> 09.06.2010 11:47, Dave Chinner wrote:
>>> On Wed, Jun 09, 2010 at 10:43:37AM +0400, Michael Tokarev wrote:
>>>> 09.06.2010 03:18, Dave Chinner wrote:
>>>>> On Wed, Jun 09, 2010 at 12:34:00AM +0400, Michael Tokarev wrote:
>>>> []
>>>>>> Simple test doing random reads or writes of 4k blocks in a 1Gb
>>>>>> file located on an xfs filesystem, Mb/sec:
>>>>>>
>>>>>>                       sync  direct
>>>>>>               read   write   write
>>>>>> 2.6.27 xfs   1.17    3.69    3.80
>>>>>> 2.6.32 xfs   1.26    0.52    5.10
>>>>>>                      ^^^^
>>>>>> 2.6.32 ext3  1.19    4.91    5.02
>>>
>>> Out of curiosity, what does 2.6.34 get on this workload?
>>
>> 2.6.34 works quite well:
>>       2.6.34 xfs    1.14   4.75    5.00
>
> Ok, so we are looking at a fixed regression, then. What stable
> version of 2.6.32 are you testing? A large number of XFS fixes went
> into 2.6.32.12 (IIRC, it might have been .13), so maybe the problem
> is fixed there. Alternatively, can you use 2.6.34 rather than
> 2.6.32, or bisect the regression down to a specific set of fixes so
> we can consider whether a backport is worth the effort?

I tried 2.6.32.15.  A few previous versions too, but all recent
testing was with 2.6.32.15.  So no, the fix is not in 2.6.32.y
yet, since .15 is currently the latest.

Too bad it'd be very difficult for me to do any bisection -- users
are already not comfortable at all due to all my experiments; e.g.
their reports that are collected over the whole night have stopped
working completely since a few days ago (because I'm rebooting the
machine every night).

Yes, it'd be nice to have this fixed in 2.6.32.y.  And I promise I'll
try to find time for a bisection (but I can't promise the attempts will
be successful... ;).  Definitely worth a try anyway.

Thank you!

/mjt


* Re: xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown]
  2010-06-10  0:47             ` Dave Chinner
  2010-06-10  5:59               ` Michael Tokarev
@ 2010-06-10 14:58               ` Eric Sandeen
  1 sibling, 0 replies; 10+ messages in thread
From: Eric Sandeen @ 2010-06-10 14:58 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Michael Tokarev, Linux-kernel, xfs

Dave Chinner wrote:
> On Wed, Jun 09, 2010 at 11:11:53PM +0400, Michael Tokarev wrote:


>> The same holds with -o osyncisosync (in .34).  Actually, the
>> osyncis[od]sync mount options do not change anything, neither
>> in .32 nor in .34.
> 
> I think only osyncisosync exists, and it doesn't do anything
> anymore.

Just to be pedantic, osyncisdsync "exists," but is deprecated and does
nothing to change defaults:

                } else if (!strcmp(this_char, "osyncisdsync")) {
                        /* no-op, this is now the default */
                        cmn_err(CE_WARN,
        "XFS: osyncisdsync is now the default, option is deprecated.");
                }

huh, didn't realize that osyncisosync does nothing but set a flag that
is never tested other than to show mount options:

  File                  Function      Line
0 xfs_mount.h           <global>      285 #define XFS_MOUNT_OSYNCISOSYNC (1ULL << 13)
1 linux-2.6/xfs_super.c xfs_parseargs 292 mp->m_flags |= XFS_MOUNT_OSYNCISOSYNC;
2 linux-2.6/xfs_super.c xfs_showargs  542 { XFS_MOUNT_OSYNCISOSYNC, "," MNTOPT_OSYNCISOSYNC },

Time to deprecate/remove that one too I guess?
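
(Presumably that would look much like the osyncisdsync branch above --
an untested sketch:)

                } else if (!strcmp(this_char, "osyncisosync")) {
                        /* hypothetical: warn and ignore, as for osyncisdsync */
                        cmn_err(CE_WARN,
        "XFS: osyncisosync has no effect and is deprecated.");
                }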

-Eric
 
> Cheers,
> 
> Dave.



Thread overview: 10+ messages
2010-06-08  9:55 xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown Michael Tokarev
2010-06-08 12:29 ` Dave Chinner
2010-06-08 20:34   ` xfs, 2.6.27=>.32 sync write 10 times slowdown [was: xfs, aacraid 2.6.27 => 2.6.32 results in 6 times slowdown] Michael Tokarev
2010-06-08 23:18     ` Dave Chinner
2010-06-09  6:43       ` Michael Tokarev
2010-06-09  7:47         ` Dave Chinner
2010-06-09 19:11           ` Michael Tokarev
2010-06-10  0:47             ` Dave Chinner
2010-06-10  5:59               ` Michael Tokarev
2010-06-10 14:58               ` Eric Sandeen
