From: Herbert Poetzl <herbert@13thfloor.at>
To: Wu Fengguang <wfg@linux.intel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>
Subject: Re: Bad SSD performance with recent kernels
Date: Sun, 29 Jan 2012 21:15:43 +0100
Message-ID: <20120129201543.GJ29272@MAIL.13thfloor.at>
In-Reply-To: <20120129161058.GA13156@localhost>

[-- Attachment #1: Type: text/plain, Size: 3331 bytes --]

On Mon, Jan 30, 2012 at 12:10:58AM +0800, Wu Fengguang wrote:
> On Sun, Jan 29, 2012 at 02:13:51PM +0100, Eric Dumazet wrote:
>> On Sunday, 29 January 2012 at 19:16 +0800, Wu Fengguang wrote:

>>> Note that as long as buffered read(2) is used, it makes almost no
>>> difference (well, at least for now) to do "dd bs=128k" or "dd bs=2MB":
>>> the 128kb readahead size will be used underneath to submit read IO.
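
untested here, but for reference, the readahead window Wu mentions
can be inspected and tuned from userspace:

    # readahead in 512-byte sectors (256 sectors = 128kB, the default)
    blockdev --getra /dev/sda

    # the same knob in kB via sysfs
    cat /sys/block/sda/queue/read_ahead_kb

    # e.g. raise it to 2MB before re-running the dd tests
    blockdev --setra 4096 /dev/sda

so dd's bs= only sizes the userspace copy buffer; the IO actually
submitted to the device is governed by this readahead setting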

>> Hmm...

>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
>> 32768+0 records in
>> 32768+0 records out
>> 4294967296 bytes (4.3 GB) copied, 20.7718 s, 207 MB/s


>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 4294967296 bytes (4.3 GB) copied, 27.7824 s, 155 MB/s

> Interesting. Here are my test results:

> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> 32768+0 records in
> 32768+0 records out
> 4294967296 bytes (4.3 GB) copied, 19.0121 s, 226 MB/s
> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> 2048+0 records in
> 2048+0 records out
> 4294967296 bytes (4.3 GB) copied, 19.0214 s, 226 MB/s

> Maybe the /dev/sda performance bug on your machine is sensitive to timing?
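
one way to take readahead and the page cache out of the picture
entirely would be O_DIRECT reads; untested on this box, but with
GNU dd something like:

    # bypass the page cache (and thus readahead) completely
    dd if=/dev/sda of=/dev/null bs=2M count=2048 iflag=direct

might help tell a readahead regression from a raw device slowdown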

here are some more confusing results from tests with dd and bonnie++;
this time I focused on partition vs. loop vs. linear dm (of the same
partition), set up roughly as sketched below
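
for reference, a minimal sketch of how the loop and dm-linear
targets can be set up over a partition (/dev/sda2 is only a
stand-in here; the actual test scripts are attached):

    # loop device backed by the partition
    losetup /dev/loop0 /dev/sda2

    # linear device-mapper target spanning the whole partition
    # (table: <start> <length> linear <device> <offset>, in sectors)
    dmsetup create linear0 \
        --table "0 $(blockdev --getsz /dev/sda2) linear /dev/sda2 0"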

kernel    ------------ read ------------  --- write ---   all
          ------- dd --------  -------- bonnie++ --------
          [MB/s]  real[s] %CPU  [MB/s] %CPU  [MB/s] %CPU  %CPU
direct
2.6.38.8  262.91   81.90  28.7	 72.30   6.0  248.53  52.0  15.9
2.6.39.4   36.09  595.17   3.1	 70.62   6.0  250.25  53.0  16.3
3.0.18     50.47  425.65   4.1	 70.00   5.0  251.70  44.0  13.9
3.1.10     27.28  787.32   2.0	 75.65   5.0  251.96  45.0  13.3
3.2.2      27.11  792.28   2.0	 76.89   6.0  250.38  44.0  13.3

loop
2.6.38.8  242.89   88.50  21.5	246.58  15.0  240.92  53.0  14.4
2.6.39.4  241.06   89.19  21.5	238.51  15.0  257.59  57.0  14.8
3.0.18	  261.44   82.23  18.8	256.66  15.0  255.17  48.0  12.6
3.1.10	  253.93   84.64  18.1	107.66   7.0  156.51  28.0  10.6
3.2.2	  262.58   81.82  19.8	110.54   7.0  212.01  40.0  11.6

linear
2.6.38.8  262.57   82.00  36.8	 72.46   6.0  243.25  53.0  16.5
2.6.39.4   25.45  843.93   2.3	 70.70   6.0  248.05  54.0  16.6
3.0.18	   55.45  387.43   5.6	 69.72   6.0  249.42  45.0  14.3
3.1.10	   36.62  586.50   3.3	 74.74   6.0  249.99  46.0  13.4
3.2.2	   28.28  759.26   2.3	 74.20   6.0  248.73  46.0  13.6


it seems that dd performance on the loop device is unaffected and
even improves with newer kernels, while its filesystem (bonnie++)
performance degrades from 3.1 onwards ...

in general, filesystem read performance is bad on everything but
the loop device ... judging from the results I'd conclude that
there are at least two distinct issues at work

tests and test results are attached and can be found here:
http://vserver.13thfloor.at/Stuff/SSD/

I plan to do some more tests on the filesystem with -b and -D
tonight; please let me know if you want to see specific output
and/or have tests I should run with each kernel ...
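
for the -b/-D runs, the invocation would be along these lines
(mount point and size are placeholders):

    # bonnie++ with no write buffering (-b) and direct IO (-D)
    bonnie++ -d /mnt/test -s 8192 -n 0 -b -D -u root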

HTH,
Herbert

> Thanks,
> Fengguang

[-- Attachment #2: SSD.txz --]
[-- Type: application/octet-stream, Size: 56488 bytes --]


Thread overview: 36+ messages
2012-01-27  6:00 Bad SSD performance with recent kernels Herbert Poetzl
2012-01-27  6:44 ` Eric Dumazet
2012-01-28 12:51 ` Wu Fengguang
2012-01-28 13:33   ` Eric Dumazet
2012-01-29  5:59     ` Wu Fengguang
2012-01-29  8:42       ` Herbert Poetzl
2012-01-29  9:28         ` Wu Fengguang
2012-01-29 10:03       ` Eric Dumazet
2012-01-29 11:16         ` Wu Fengguang
2012-01-29 13:13           ` Eric Dumazet
2012-01-29 15:52             ` Pádraig Brady
2012-01-29 16:10             ` Wu Fengguang
2012-01-29 20:15               ` Herbert Poetzl [this message]
2012-01-30 11:18                 ` Wu Fengguang
2012-01-30 12:34                   ` Eric Dumazet
2012-01-30 14:01                     ` Wu Fengguang
2012-01-30 14:05                       ` Wu Fengguang
2012-01-30  3:17               ` Shaohua Li
2012-01-30  5:31                 ` Eric Dumazet
2012-01-30  5:45                   ` Shaohua Li
2012-01-30  7:13                 ` Herbert Poetzl
2012-01-30  7:22                   ` Shaohua Li
2012-01-30  7:36                     ` Herbert Poetzl
2012-01-30  8:12                       ` Shaohua Li
2012-01-30 10:31                         ` Shaohua Li
2012-01-30 14:28                           ` Wu Fengguang
2012-01-30 14:51                             ` Eric Dumazet
2012-01-30 22:26                               ` Vivek Goyal
2012-01-31  0:14                                 ` Shaohua Li
2012-01-31  1:07                                   ` Wu Fengguang
2012-01-31  3:00                                     ` Shaohua Li
2012-01-31  2:17                                 ` Eric Dumazet
2012-01-31  8:46                                 ` Eric Dumazet
2012-01-31  6:36                             ` Herbert Poetzl
2012-01-30 14:48         ` Wu Fengguang
2012-01-28 17:01   ` Herbert Poetzl
