On Mon, Dec 12, 2011 at 07:39, Dave Chinner <david@fromorbit.com> wrote:
>
> > ====== XFS + 2.6.29 ======
>
> Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
> Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
> Total 1.5m IOs, 95% @ <= 2ms
>
> > ====== XFS + 2.6.39 ======
>
> Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> Total 460k IOs, 95% @ <= 10ms, 4ms > 50% < 10ms
>
> Looking at the IO stats there, this doesn't look to me like an XFS
> problem. The IO times are much, much longer on 2.6.39, so that's the
> first thing to understand. If the two tests are doing identical IO
> patterns, then I'd be looking at validating raw device performance
> first.
>

Thank you, Dave.

I also ran raw device and ext4 performance tests with 2.6.39. All of these tests
use identical IO patterns (non-buffered IO, 16 IO threads, 16KB block size,
mixed random reads and writes, r:w = 9:1); a small sketch of the access pattern
follows the result summaries below:
====== raw device + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.095ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== ext4 + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.1ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== XFS + 2.6.39 ======
Read 6.5GB @ 3.5k IOPS, 55MB/s, av latency of 4.5ms/IO
Wrote 700MB @ 386 IOPS, 6MB/s, av latency of 0.39ms/IO
Total 460k IOs, 95% @ <= 10ms, 50% between 4ms and 10ms
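
For reference, the following is roughly the access pattern these tests generate.
It is only a single-threaded sketch of the workload described above (the real
runs use 16 IO threads), and the device path and IO count are placeholders, not
the actual test setup:

/*
 * Sketch of the IO pattern: non-buffered (O_DIRECT) 16KB random IO with
 * a ~9:1 read/write mix.  Single-threaded for brevity; the device path
 * and number of IOs are placeholders.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ (16 * 1024)
#define NIOS  100000

int main(void)
{
        const char *dev = "/dev/sdb";            /* placeholder device */
        int fd = open(dev, O_RDWR | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        off_t devsize = lseek(fd, 0, SEEK_END);
        long nblocks = devsize / BLKSZ;
        if (nblocks <= 0) { close(fd); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, BLKSZ))   /* O_DIRECT needs aligned buffers */
                return 1;
        memset(buf, 0, BLKSZ);

        srandom(getpid());
        for (long i = 0; i < NIOS; i++) {
                off_t off = (off_t)(random() % nblocks) * BLKSZ;
                if (random() % 10 == 0) {        /* ~10% writes */
                        if (pwrite(fd, buf, BLKSZ, off) != BLKSZ)
                                perror("pwrite");
                } else {                         /* ~90% reads */
                        if (pread(fd, buf, BLKSZ, off) != BLKSZ)
                                perror("pread");
                }
        }

        free(buf);
        close(fd);
        return 0;
}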

Here are the detailed test results:
== 2.6.39 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-raw.txt

== 2.6.29 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-raw.txt

>
> > I tried different XFS format options and different mount options, but
> > it did not help.
>
> It won't if the problem is in the layers below XFS.
>
> e.g. IO scheduler behavioural changes could be the cause (esp. if
> you are using CFQ), the SSD could be in different states or running
> garbage collection intermittently and slowing things down, the
> filesystem could be in different states (did you use a fresh
> filesystem for each of these tests?), etc. Also, recent mkfs.xfs will
> trim the entire device if the kernel supports it.


I ran all the tests on the same server with the deadline IO scheduler, and the
xfsprogs version is 3.1.4. I also ran the tests with the noop scheduler, but
there was no big difference.
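
In case it helps with reproducing the setup, a rough sketch of how the scheduler
can be switched between runs (the device name is only a placeholder; a plain
echo into queue/scheduler does the same thing):

/*
 * Sketch: select the "deadline" scheduler for a block device by writing
 * to its sysfs queue/scheduler file.  The device name is a placeholder;
 * "echo deadline > /sys/block/sdb/queue/scheduler" is equivalent.
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/block/sdb/queue/scheduler", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("deadline\n", f);
        fclose(f);
        return 0;
}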

--
Xupeng Yun
http://about.me/xupeng