From: Xupeng Yun <xupeng@xupeng.me>
To: Dave Chinner <david@fromorbit.com>
Cc: XFS group <xfs@oss.sgi.com>
Subject: Re: Bad performance with XFS + 2.6.38 / 2.6.39
Date: Mon, 12 Dec 2011 08:40:15 +0800	[thread overview]
Message-ID: <CACaf2aYTsxOBXEJEbQu7gwAminBc3R2usDHvypJW0AqOfnz0Pg@mail.gmail.com> (raw)
In-Reply-To: <20111211233929.GI14273@dastard>



On Mon, Dec 12, 2011 at 07:39, Dave Chinner <david@fromorbit.com> wrote:
>
> > ====== XFS + 2.6.29 ======
>
> Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
> Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
> Total 1.5m IOs, 95% @ <= 2ms
>
> > ====== XFS + 2.6.39 ======
>
> Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> Total 460k IOs, 95% @ <= 10ms, 4ms > 50% < 10ms
>
> Looking at the IO stats there, this doesn't look to me like an XFS
> problem. The IO times are much, much longer on 2.6.39, so that's the
> first thing to understand. If the two tests are doing identical IO
> patterns, then I'd be looking at validating raw device performance
> first.
>

Thank you Dave.

I also ran raw-device and ext4 performance tests with 2.6.39. All of these
tests use identical IO patterns (non-buffered IO, 16 IO threads, 16KB block
size, mixed random reads and writes, r:w = 9:1):
====== raw device + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.095ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== ext4 + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.1ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== XFS + 2.6.39 ======
Read 6.5GB @ 3.5k IOPS, 55MB/s, av latency of 4.5ms/IO
Wrote 700MB @ 386 IOPS, 6MB/s, av latency of 0.39ms/IO
Total 460k IOs, 95% @ <= 10ms, 50% between 4ms and 10ms
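
For reference, a workload with these parameters could be expressed as a fio
job file roughly like the sketch below. The exact job file used in the tests
is not shown in this thread, so the device path, size, and engine settings
here are placeholders, not the values actually used:

```ini
; Hypothetical fio job matching the stated pattern: non-buffered IO,
; 16 threads, 16KB block size, mixed random read/write at r:w = 9:1.
[global]
; O_DIRECT (non-buffered) IO
direct=1
bs=16k
rw=randrw
; 90% reads, 10% writes
rwmixread=90
; 16 IO threads
numjobs=16
ioengine=libaio
iodepth=1

[randrw-test]
; Placeholder target; the actual device under test is not named here
filename=/dev/sdb
size=24g
```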

Here are the detailed test results:
== 2.6.39 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-raw.txt

== 2.6.29 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-raw.txt

>
> > I tried different XFS format options and different mount options, but
> > it did not help.
>
> It won't if the problem is in the layers below XFS.
>
> e.g. IO scheduler behavioural changes could be the cause (esp. if
> you are using CFQ), the SSD could be in different states or running
> garbage collection intermittently and slowing things down, the
> filesystem could be in different states (did you use a fresh
> filesystem for each of these tests?), etc, recent mkfs.xfs will trim
> the entire device if the kernel supports it, etc.


I ran all the tests on the same server with the deadline scheduler, and the
xfsprogs version is 3.1.4. I also ran the tests with the noop scheduler, with
no big difference.
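
For completeness, the per-device scheduler is selected through sysfs; a
sketch, assuming /dev/sdb is the device under test (requires root):

```shell
# Show the available schedulers; the active one is shown in [brackets]
cat /sys/block/sdb/queue/scheduler

# Select the deadline elevator for this device
echo deadline > /sys/block/sdb/queue/scheduler

# Or the noop elevator
echo noop > /sys/block/sdb/queue/scheduler
```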

--
Xupeng Yun
http://about.me/xupeng
