From: Shaohua Li <shaohua.li@intel.com>
To: Herbert Poetzl <herbert@13thfloor.at>
Cc: Wu Fengguang <wfg@linux.intel.com>,
	Eric Dumazet <eric.dumazet@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>
Subject: Re: Bad SSD performance with recent kernels
Date: Mon, 30 Jan 2012 16:12:22 +0800
Message-ID: <1327911142.21268.7.camel@sli10-conroe>
In-Reply-To: <20120130073621.GN29272@MAIL.13thfloor.at>

On Mon, 2012-01-30 at 08:36 +0100, Herbert Poetzl wrote:
> On Mon, Jan 30, 2012 at 03:22:38PM +0800, Shaohua Li wrote:
> > On Mon, 2012-01-30 at 08:13 +0100, Herbert Poetzl wrote:
> >> On Mon, Jan 30, 2012 at 11:17:38AM +0800, Shaohua Li wrote:
> >>> 2012/1/30 Wu Fengguang <wfg@linux.intel.com>:
> >>>> On Sun, Jan 29, 2012 at 02:13:51PM +0100, Eric Dumazet wrote:
> >>>>> On Sunday, 29 January 2012 at 19:16 +0800, Wu Fengguang wrote:
> 
> >>>>>> Note that as long as buffered read(2) is used, it makes almost no
> >>>>>> difference (well, at least for now) to do "dd bs=128k" or "dd bs=2M":
> >>>>>> the 128KB readahead size will be used underneath to submit the read IO.
> 
> >>>>> Hmm...
> 
> >>>>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> >>>>> 32768+0 records in
> >>>>> 32768+0 records out
> >>>>> 4294967296 bytes (4.3 GB) copied, 20.7718 s, 207 MB/s
> 
> 
> >>>>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> >>>>> 2048+0 records in
> >>>>> 2048+0 records out
> >>>>> 4294967296 bytes (4.3 GB) copied, 27.7824 s, 155 MB/s
> 
> >>>> Interesting. Here are my test results:
> 
> >>>> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> >>>> 32768+0 records in
> >>>> 32768+0 records out
> >>>> 4294967296 bytes (4.3 GB) copied, 19.0121 s, 226 MB/s
> >>>> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> >>>> 2048+0 records in
> >>>> 2048+0 records out
> >>>> 4294967296 bytes (4.3 GB) copied, 19.0214 s, 226 MB/s
> 
> >>>> Maybe the /dev/sda performance bug on your machine is sensitive to timing?
> >>> I got similar results:
> >>> 128k: 224MB/s
> >>> 1M:   182MB/s
> 
> >>> The 1M block size is slower; I guess it's CPU related.
> 
> >>> And as for the big regression with kernels newer than 2.6.38,
> >>> please check whether idle=poll helps. CPU idle dramatically impacts
> >>> disk performance, and even the latest cpuidle governor doesn't help
> >>> for some CPUs.
> 
> >> here are the tests with idle=poll and after switching to 128k
> >> (instead of 1M) blocksize (same amount of data transferred)
> 
> >> kernel    ------------ read /dev/sda -------------
> >>           --- noop ---  - deadline -  ---- cfq ---
> >>           [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU
> >> --------------------------------------------------
> >> 3.2.2      45.82   3.7   44.85   3.6   45.04   3.4
> >> 3.2.2i     45.59   2.3   51.78   2.6   46.03   2.2
> >> 3.2.2i128 250.24  20.9  252.68  21.3  250.00  21.6
> 
> >> kernel    -- write ---  ------------------read -----------------
> >>           --- noop ---  --- noop ---  - deadline -  ---- cfq ---
> >>           [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU
> >> ----------------------------------------------------------------
> >> 3.2.2     270.95  42.6  162.36   9.9  162.63   9.9  162.65  10.1
> >> 3.2.2i    269.10  41.4  170.82   6.6  171.20   6.6  170.91   6.7
> >> 3.2.2i128 270.38  67.7  162.35  10.2  163.01  10.3  162.34  10.7
> 
> > What are 3.2.2i and 3.2.2i128?
> 
> 3.2.2 ...... kernel with default options (bs=1M)
> 3.2.2i ..... kernel with idle=poll (bs=1M)
> 3.2.2i128 .. kernel with idle=poll (bs=128k)
> 
> > does idle=poll help?
> 
> doesn't look like, at least to me ...
What's your /sys/block/sdx/queue/max_sectors_kb? If you make it smaller,
does the performance increase? On my system, a smaller max_sectors_kb
makes bs=2M and bs=128k perform similarly, which makes me think the
CPU doesn't catch up quickly after a request finishes.
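
For reference, the request size cap can be inspected and lowered from
sysfs (a quick sketch; "sda" below stands for whatever device is under
test, and 128 is just one value to try):

  cat /sys/block/sda/queue/max_sectors_kb          # current cap on request size, in KB
  cat /sys/block/sda/queue/max_hw_sectors_kb       # hardware limit (read-only)
  echo 128 > /sys/block/sda/queue/max_sectors_kb   # cap requests at 128KB

With the cap at 128KB, a bs=2M dd gets split into the same request sizes
as a bs=128k one, so rerunning the comparison shows whether request size
is really the variable.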
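
Similarly, the readahead size Fengguang mentioned earlier can be checked
and changed per device (again, "sda" is only an example):

  cat /sys/block/sda/queue/read_ahead_kb          # readahead window in KB (128 by default)
  blockdev --getra /dev/sda                       # same setting, in 512-byte sectors
  echo 512 > /sys/block/sda/queue/read_ahead_kb   # widen the window to 512KB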
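
And to see how deep the CPUs are allowed to sleep when idle=poll is not
set (a sketch; these sysfs paths assume a cpuidle-enabled kernel of this
era):

  cat /sys/devices/system/cpu/cpuidle/current_driver
  for d in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
      echo "$(cat $d/name): exit latency $(cat $d/latency) us, time in state $(cat $d/time) us"
  done

Booting with idle=poll simply keeps the CPU out of all of these states,
at the cost of extra power.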



Thread overview: 36+ messages
2012-01-27  6:00 Bad SSD performance with recent kernels Herbert Poetzl
2012-01-27  6:44 ` Eric Dumazet
2012-01-28 12:51 ` Wu Fengguang
2012-01-28 13:33   ` Eric Dumazet
2012-01-29  5:59     ` Wu Fengguang
2012-01-29  8:42       ` Herbert Poetzl
2012-01-29  9:28         ` Wu Fengguang
2012-01-29 10:03       ` Eric Dumazet
2012-01-29 11:16         ` Wu Fengguang
2012-01-29 13:13           ` Eric Dumazet
2012-01-29 15:52             ` Pádraig Brady
2012-01-29 16:10             ` Wu Fengguang
2012-01-29 20:15               ` Herbert Poetzl
2012-01-30 11:18                 ` Wu Fengguang
2012-01-30 12:34                   ` Eric Dumazet
2012-01-30 14:01                     ` Wu Fengguang
2012-01-30 14:05                       ` Wu Fengguang
2012-01-30  3:17               ` Shaohua Li
2012-01-30  5:31                 ` Eric Dumazet
2012-01-30  5:45                   ` Shaohua Li
2012-01-30  7:13                 ` Herbert Poetzl
2012-01-30  7:22                   ` Shaohua Li
2012-01-30  7:36                     ` Herbert Poetzl
2012-01-30  8:12                       ` Shaohua Li [this message]
2012-01-30 10:31                         ` Shaohua Li
2012-01-30 14:28                           ` Wu Fengguang
2012-01-30 14:51                             ` Eric Dumazet
2012-01-30 22:26                               ` Vivek Goyal
2012-01-31  0:14                                 ` Shaohua Li
2012-01-31  1:07                                   ` Wu Fengguang
2012-01-31  3:00                                     ` Shaohua Li
2012-01-31  2:17                                 ` Eric Dumazet
2012-01-31  8:46                                 ` Eric Dumazet
2012-01-31  6:36                             ` Herbert Poetzl
2012-01-30 14:48         ` Wu Fengguang
2012-01-28 17:01   ` Herbert Poetzl
