From: Xupeng Yun <xupeng@xupeng.me>
To: XFS group <xfs@oss.sgi.com>
Subject: Bad performance with XFS + 2.6.38 / 2.6.39
Date: Sun, 11 Dec 2011 20:45:14 +0800	[thread overview]
Message-ID: <CACaf2aYZ=k=x8sPFJs4f-4vQxs+qNyoO1EUi8X=iBjWjRhy99Q@mail.gmail.com> (raw)



Hi,

I am using XFS + 2.6.29 on my MySQL servers, and they perform great.

These days I am testing XFS on SSDs. Because FITRIM support for XFS shipped
with Linux kernel 2.6.38, I tested XFS + 2.6.38 and XFS + 2.6.39, and it
surprised me how much XFS performance drops with these two kernel versions.
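As a rough sanity check (the 2.6.38 cutoff is the reason for these kernel choices), the version comparison can be scripted; the `fstrim` invocation at the end is how the FITRIM ioctl is normally issued on a mounted filesystem, shown here only as an illustration:

```shell
# FITRIM on XFS needs kernel >= 2.6.38; compare versions with sort -V.
# kver is hard-coded here for illustration.
kver="2.6.39"
if [ "$(printf '%s\n' 2.6.38 "$kver" | sort -V | head -n1)" = "2.6.38" ]; then
    echo "FITRIM expected to be supported"
else
    echo "kernel too old for XFS FITRIM"
fi
# Once supported, online discard can be triggered with fstrim(8), e.g.:
#   fstrim -v /mnt/xfs
```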

Here are the results of my tests with fio; both tests were run on the same
hardware in the same testing environment, except for the kernel version.

====== XFS + 2.6.29 ======

# mount | grep /mnt/xfs
/dev/sdc1 on /mnt/xfs type xfs (rw,noatime,nodiratime,nobarrier,logbufs=8)
# fio --filename=/mnt/xfs/test --direct=1 --rw=randrw --bs=16k --size=50G \
    --numjobs=16 --runtime=120 --group_reporting --name=test --rwmixread=90 \
    --thread --ioengine=psync
test: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1
...
test: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1
fio 1.58
Starting 16 threads
test: Laying out IO file(s) (1 file(s) / 51200MB)
Jobs: 16 (f=16): [mmmmmmmmmmmmmmmm] [100.0% done] [181.5M/21118K /s] [11.4K/1289 iops] [eta 00m:00s]
test: (groupid=0, jobs=16): err= 0: pid=8446
read : io=21312MB, bw=181862KB/s, iops=11366 , runt=120001msec
clat (usec): min=80 , max=146337 , avg=1369.72, stdev=1026.26
lat (usec): min=81 , max=146338 , avg=1370.87, stdev=1026.27
bw (KB/s) : min= 6998, max=13600, per=6.26%, avg=11376.13, stdev=499.42
write: io=2369.4MB, bw=20218KB/s, iops=1263 , runt=120001msec
clat (usec): min=67 , max=145760 , avg=268.28, stdev=894.06
lat (usec): min=67 , max=145761 , avg=269.46, stdev=894.09
bw (KB/s) : min= 509, max= 2166, per=6.26%, avg=1265.42, stdev=213.82
cpu : usr=11.09%, sys=44.83%, ctx=26015341, majf=0, minf=8396
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=1363980/151635/0, short=0/0/0
lat (usec): 100=0.11%, 250=5.85%, 500=3.79%, 750=0.32%, 1000=5.51%
lat (msec): 2=80.06%, 4=1.26%, 10=3.07%, 20=0.01%, 50=0.01%
lat (msec): 100=0.01%, 250=0.01%

Run status group 0 (all jobs):
READ: io=21312MB, aggrb=181862KB/s, minb=186227KB/s, maxb=186227KB/s, mint=120001msec, maxt=120001msec
WRITE: io=2369.4MB, aggrb=20217KB/s, minb=20703KB/s, maxb=20703KB/s, mint=120001msec, maxt=120001msec

Disk stats (read/write):
sdc: ios=1361926/151423, merge=0/0, ticks=1793432/27812, in_queue=1820240, util=99.99%




====== XFS + 2.6.39 ======

# mount | grep /mnt/xfs
/dev/sdc1 on /mnt/xfs type xfs (rw,noatime,nodiratime,nobarrier,logbufs=8)
# fio --filename=/mnt/xfs/test --direct=1 --rw=randrw --bs=16k --size=50G \
    --numjobs=16 --runtime=120 --group_reporting --name=test --rwmixread=90 \
    --thread --ioengine=psync
test: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1
...
test: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1
fio 1.58
Starting 16 threads
test: Laying out IO file(s) (1 file(s) / 51200MB)
Jobs: 16 (f=16): [mmmmmmmmmmmmmmmm] [100.0% done] [58416K/6680K /s] [3565/407 iops] [eta 00m:00s]
test: (groupid=0, jobs=16): err= 0: pid=26902
read : io=6507.1MB, bw=55533KB/s, iops=3470 , runt=120004msec
clat (usec): min=155 , max=356038 , avg=4562.52, stdev=4748.18
lat (usec): min=156 , max=356038 , avg=4562.69, stdev=4748.19
bw (KB/s) : min= 1309, max= 4864, per=6.26%, avg=3479.03, stdev=441.47
write: io=741760KB, bw=6181.2KB/s, iops=386 , runt=120004msec
clat (usec): min=71 , max=348236 , avg=390.11, stdev=3106.30
lat (usec): min=71 , max=348236 , avg=390.31, stdev=3106.30
bw (KB/s) : min= 28, max= 921, per=6.29%, avg=389.02, stdev=114.68
cpu : usr=3.43%, sys=11.12%, ctx=21598477, majf=0, minf=7762
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=416508/46360/0, short=0/0/0
lat (usec): 100=2.65%, 250=0.98%, 500=6.58%, 750=31.88%, 1000=0.27%
lat (msec): 2=0.08%, 4=0.23%, 10=55.04%, 20=1.76%, 50=0.49%
lat (msec): 100=0.02%, 250=0.01%, 500=0.01%

Run status group 0 (all jobs):
READ: io=6507.1MB, aggrb=55532KB/s, minb=56865KB/s, maxb=56865KB/s, mint=120004msec, maxt=120004msec
WRITE: io=741760KB, aggrb=6181KB/s, minb=6329KB/s, maxb=6329KB/s, mint=120004msec, maxt=120004msec

Disk stats (read/write):
sdc: ios=416285/46351, merge=0/1, ticks=108136/8768, in_queue=116368, util=93.60%


As the test results show, the aggregate IOPS with XFS + 2.6.29 is about
12600, but it drops to about 3900 with XFS + 2.6.39.
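Those totals are just the read and write IOPS from the fio summaries above added together; a quick arithmetic check of the numbers and the relative drop:

```shell
# Aggregate IOPS = read iops + write iops, from the fio output above.
echo "2.6.29: $((11366 + 1263)) IOPS"   # ~12600
echo "2.6.39: $((3470 + 386)) IOPS"     # ~3900
# Relative drop, in percent (integer arithmetic):
echo "drop: $(( (12629 - 3856) * 100 / 12629 ))%"   # ~69%
```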

I tried different XFS format options and different mount options, but
it did not help.

Any thoughts?

--
Xupeng Yun
http://about.me/xupeng


Thread overview: 20+ messages
2011-12-11 12:45 Xupeng Yun [this message]
2011-12-11 23:39 ` Bad performance with XFS + 2.6.38 / 2.6.39 Dave Chinner
2011-12-12  0:40   ` Xupeng Yun
2011-12-12  1:00     ` Dave Chinner
2011-12-12  2:00       ` Xupeng Yun
2011-12-12 13:57         ` Christoph Hellwig
2011-12-21  9:08         ` Yann Dupont
2011-12-21 15:10           ` Stan Hoeppner
2011-12-21 17:56             ` Yann Dupont
2011-12-21 22:26               ` Dave Chinner
2011-12-22  9:23                 ` Yann Dupont
2011-12-22 11:02                   ` Yann Dupont
2012-01-02 10:06                     ` Yann Dupont
2012-01-02 16:08                       ` Peter Grandi
2012-01-02 18:02                         ` Peter Grandi
2012-01-04 10:54                         ` Yann Dupont
2012-01-02 20:35                       ` Dave Chinner
2012-01-03  8:20                         ` Yann Dupont
2012-01-04 12:33                           ` Christoph Hellwig
2012-01-04 13:06                             ` Yann Dupont
