From: Vladislav Bolkhovitin <vst@vlnb.net>
To: Ronald Moesbergen <intercommit@gmail.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>,
	linux-kernel@vger.kernel.org,
	Bart Van Assche <bart.vanassche@gmail.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
Date: Wed, 01 Jul 2009 22:12:09 +0400
Message-ID: <4A4BA6F9.8010704@vlnb.net>
In-Reply-To: <a0272b440907010607g4c0d0c7fk3ad9659319230a4d@mail.gmail.com>


Ronald Moesbergen, on 07/01/2009 05:07 PM wrote:
> 2009/6/30 Vladislav Bolkhovitin <vst@vlnb.net>:
>> Wu Fengguang, on 06/30/2009 05:04 AM wrote:
>>> On Mon, Jun 29, 2009 at 11:37:41PM +0800, Vladislav Bolkhovitin wrote:
>>>> Wu Fengguang, on 06/29/2009 07:01 PM wrote:
>>>>> On Mon, Jun 29, 2009 at 10:21:24PM +0800, Wu Fengguang wrote:
>>>>>> On Mon, Jun 29, 2009 at 10:00:20PM +0800, Ronald Moesbergen wrote:
>>>>>>> ... tests ...
>>>>>>>
>>>>>>>> We started with 2.6.29, so why not complete with it (to save Ronald
>>>>>>>> the additional effort of moving to 2.6.30)?
>>>>>>>>
>>>>>>>>>> 2. Default vanilla 2.6.29 kernel, 512 KB read-ahead, the rest is
>>>>>>>>>> default
>>>>>>>>> How about 2MB RAID readahead size? That transforms into about 512KB
>>>>>>>>> per-disk readahead size.
>>>>>>>> OK. Ronald, can you run 4 more test cases, please:
>>>>>>>>
>>>>>>>> 7. Default vanilla 2.6.29 kernel, 2MB read-ahead, the rest is default
>>>>>>>>
>>>>>>>> 8. Default vanilla 2.6.29 kernel, 2MB read-ahead, 64 KB
>>>>>>>> max_sectors_kb, the rest is default
>>>>>>>>
>>>>>>>> 9. Vanilla 2.6.29 kernel patched with Fengguang's patch, 2MB
>>>>>>>> read-ahead, the rest is default
>>>>>>>>
>>>>>>>> 10. Vanilla 2.6.29 kernel patched with Fengguang's patch, 2MB
>>>>>>>> read-ahead, 64 KB max_sectors_kb, the rest is default
>>>>>>> The results:
>>>>>> I made a blind average:
>>>>>>
>>>>>> N       MB/s          IOPS      case
>>>>>>
>>>>>> 0      114.859       984.148    Unpatched, 128KB readahead, 512 max_sectors_kb
>>>>>> 1      122.960       981.213    Unpatched, 512KB readahead, 512 max_sectors_kb
>>>>>> 2      120.709       985.111    Unpatched, 2MB readahead,   512 max_sectors_kb
>>>>>> 3      158.732      1004.714    Unpatched, 512KB readahead,  64 max_sectors_kb
>>>>>> 4      159.237       979.659    Unpatched, 2MB readahead,    64 max_sectors_kb
>>>>>>
>>>>>> 5      114.583       982.998    Patched,   128KB readahead, 512 max_sectors_kb
>>>>>> 6      124.902       987.523    Patched,   512KB readahead, 512 max_sectors_kb
>>>>>> 7      127.373       984.848    Patched,   2MB readahead,   512 max_sectors_kb
>>>>>> 8      161.218       986.698    Patched,   512KB readahead,  64 max_sectors_kb
>>>>>> 9      163.908       574.651    Patched,   2MB readahead,    64 max_sectors_kb
>>>>>>
>>>>>> So before/after patch:
>>>>>>
>>>>>>        avg throughput      135.299 => 138.397  by +2.3%
>>>>>>        avg IOPS            986.969 => 903.344  by -8.5%
>>>>>>
>>>>>> The IOPS is a bit weird.
>>>>>>
>>>>>> Summaries:
>>>>>> - this patch improves RAID throughput by +2.3% on average
>>>>>> - after this patch, 2MB readahead performs slightly better
>>>>>>  (by 1-2%) than 512KB readahead
>>>>> and the most important one:
>>>>> - 64 max_sectors_kb performs much better than 512 max_sectors_kb, by ~30%!
>>>> Yes, I just wanted to point that out ;)
>>> OK, now I tend to agree on decreasing max_sectors_kb and increasing
>>> read_ahead_kb. But before actually trying to push that idea I'd like to:
>>> - do more benchmarks
>>> - figure out why context readahead didn't help SCST performance
>>>  (previous traces show that context readahead is submitting perfect
>>>   large io requests, so I wonder if it's some io scheduler bug)
>> Because, as we found out, without your http://lkml.org/lkml/2009/5/21/319
>> patch read-ahead was nearly disabled, hence there was no difference
>> regardless of which algorithm was used?
>>
>> Ronald, can you run the following tests, please? This time with 2 hosts,
>> an initiator (client) and a target (server) connected using 1 Gbps iSCSI.
>> It would be best if vanilla 2.6.29 were run on the client, but any other
>> kernel is fine as well; just specify which one. Blockdev-perftest should
>> be run as before in buffered mode, i.e. with the "-a" switch.
> 
> I could, but only the first 'dd' run of blockdev-perftest will have
> any value, since all the others will be served from the target's cache.
> Won't that make the results pretty much useless? Are you sure this
> is what you want me to test?

Hmm, I forgot about this. Can you set up passwordless ssh from the client
to the server and modify the drop_caches() function in blockdev-perftest
on the client so that, instead of

   sync
   echo 3 > /proc/sys/vm/drop_caches

it does

   ssh root@target "sync; echo 3 > /proc/sys/vm/drop_caches"
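
A minimal sketch of the modified function, assuming drop_caches() is a
plain shell function inside blockdev-perftest (the exact body in the
script may differ):

   # Drop the page cache on the remote target instead of the local host,
   # so that every dd run in the benchmark actually hits the target's disks.
   drop_caches()
   {
           ssh root@target "sync; echo 3 > /proc/sys/vm/drop_caches"
   }

Once the client's public key is installed on the target (e.g. with
ssh-copy-id root@target), "ssh root@target true" should complete without
a password prompt before the benchmark is started.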

Thanks,
Vlad

