From: Vladislav Bolkhovitin <vst@vlnb.net>
To: Ronald Moesbergen <intercommit@gmail.com>
Cc: fengguang.wu@intel.com, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, kosaki.motohiro@jp.fujitsu.com,
	Alan.Brunelle@hp.com, linux-fsdevel@vger.kernel.org,
	jens.axboe@oracle.com, randy.dunlap@oracle.com,
	Bart Van Assche <bart.vanassche@gmail.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
Date: Mon, 20 Jul 2009 11:20:12 +0400
Message-ID: <4A641AAC.9030300@vlnb.net>
In-Reply-To: <4A60C1A8.9020504@vlnb.net>


Vladislav Bolkhovitin, on 07/17/2009 10:23 PM wrote:
> Ronald Moesbergen, on 07/17/2009 06:15 PM wrote:
>> 2009/7/16 Vladislav Bolkhovitin <vst@vlnb.net>:
>>> Ronald Moesbergen, on 07/16/2009 11:32 AM wrote:
>>>> 2009/7/15 Vladislav Bolkhovitin <vst@vlnb.net>:
>>>>>> The drop with 64 max_sectors_kb on the client is a consequence of how
>>>>>> CFQ works. I can't find the exact code responsible for this, but by
>>>>>> all signs CFQ stops delaying requests once the number of outstanding
>>>>>> requests exceeds some threshold, which is 2 or 3. With 64
>>>>>> max_sectors_kb and 5 SCST I/O threads this threshold is exceeded, so
>>>>>> CFQ doesn't restore the ordering of requests, hence the performance
>>>>>> drop. With the default 512 max_sectors_kb and 128K RA the server sees
>>>>>> at most 2 requests at a time.
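>>>>>>
>>>>>> (One rough way to probe whether that delaying/idling is involved is
>>>>>> CFQ's slice_idle knob; the device name here is illustrative:
>>>>>>
>>>>>>   # 0 disables CFQ's idle-waiting between requests; the default is 8 ms
>>>>>>   echo 0 > /sys/block/sdb/queue/iosched/slice_idle
>>>>>>
>>>>>> If the drop disappears with idling off, the delaying logic is the
>>>>>> likely culprit.)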
>>>>>>
>>>>>> Ronald, can you perform the same tests with 1 and 2 SCST I/O threads,
>>>>>> please?
>>>> Ok. Should I still use the file-on-xfs testcase for this, or should I
>>>> go back to using a regular block device?
>>> Yes, please
>>>
>>>> The file-over-iSCSI case is quite uncommon, I suppose; most people
>>>> will export a block device over iSCSI, not a file.
>>> No, files are common. The main reason people use direct block devices
>>> is the belief, not supported by anything, that compared with files they
>>> "have less overhead" and so "should be faster". But that isn't true,
>>> and it can easily be checked.
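>>>
>>> For example, roughly (device and file paths are illustrative):
>>>
>>>   # cold-cache sequential read from the raw device
>>>   echo 3 > /proc/sys/vm/drop_caches
>>>   dd if=/dev/sdb of=/dev/null bs=1M count=4096
>>>
>>>   # cold-cache sequential read from a file on the same disk
>>>   echo 3 > /proc/sys/vm/drop_caches
>>>   dd if=/mnt/xfs/testfile of=/dev/null bs=1M count=4096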
>>>
>>>>> With the context-RA patch, please, in those and future tests, since it
>>>>> should make RA for cooperating threads much better.
>>>>>
>>>>>> You can limit the number of SCST I/O threads with the num_threads
>>>>>> parameter of the scst_vdisk module.
>>>> Ok, I'll try that and include the blk_run_backing_dev,
>>>> readahead-context and io_context patches.
>> The results:
>>
>> client kernel: 2.6.26-15lenny3 (debian)
>> server kernel: 2.6.29.5 with readahead-context, blk_run_backing_dev
>> and io_context
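>>
>> For reference, these max_sectors_kb/RA settings are presumably applied
>> via the usual knobs, roughly as follows (sdb stands in for the actual
>> device):
>>
>>   # cap the maximum request size at 64 KB
>>   echo 64 > /sys/block/sdb/queue/max_sectors_kb
>>   # set a 2MB readahead window (blockdev --setra counts 512-byte sectors)
>>   blockdev --setra 4096 /dev/sdb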
>>
>> With one IO thread:
>>
>> 5) client: default, server: default
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  15.990   15.308   16.689   64.097    2.259    1.002
>>  33554432  15.981   16.064   16.221   63.651    0.392    1.989
>>  16777216  15.841   15.660   16.031   64.635    0.619    4.040
>>
>> 6) client: default, server: 64 max_sectors_kb, RA default
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  16.035   16.024   16.654   63.084    1.130    0.986
>>  33554432  15.924   15.975   16.359   63.668    0.762    1.990
>>  16777216  16.168   16.104   15.838   63.858    0.571    3.991
>>
>> 7) client: default, server: default max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  14.895   16.142   15.998   65.398    2.379    1.022
>>  33554432  16.753   16.169   16.067   62.729    1.146    1.960
>>  16777216  16.866   15.912   16.099   62.892    1.570    3.931
>>
>> 8) client: default, server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  15.923   15.716   16.741   63.545    1.715    0.993
>>  33554432  16.010   16.026   16.113   63.802    0.180    1.994
>>  16777216  16.644   16.239   16.143   62.672    0.827    3.917
>>
>> 9) client: 64 max_sectors_kb, default RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  15.753   15.882   15.482   65.207    0.697    1.019
>>  33554432  15.670   16.268   15.669   64.548    1.134    2.017
>>  16777216  15.746   15.519   16.411   64.471    1.516    4.029
>>
>> 10) client: default max_sectors_kb, 2MB RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  13.639   14.360   13.654   73.795    1.758    1.153
>>  33554432  13.584   13.938   14.538   73.095    2.035    2.284
>>  16777216  13.617   13.510   13.803   75.060    0.665    4.691
>>
>> 11) client: 64 max_sectors_kb, 2MB RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  13.428   13.541   14.144   74.760    1.690    1.168
>>  33554432  13.707   13.352   13.462   75.821    0.827    2.369
>>  16777216  14.380   13.504   13.675   73.975    1.991    4.623
>>
>> With two threads:
>> 5) client: default, server: default
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  12.453   12.173   13.014   81.677    2.254    1.276
>>  33554432  12.066   11.999   12.960   83.073    2.877    2.596
>>  16777216  13.719   11.969   12.569   80.554    4.500    5.035
>>
>> 6) client: default, server: 64 max_sectors_kb, RA default
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  12.886   12.201   12.147   82.564    2.198    1.290
>>  33554432  12.344   12.928   12.007   82.483    2.504    2.578
>>  16777216  12.380   11.951   13.119   82.151    3.141    5.134
>>
>> 7) client: default, server: default max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  12.824   13.485   13.534   77.148    1.913    1.205
>>  33554432  12.084   13.752   12.111   81.251    4.800    2.539
>>  16777216  12.658   13.035   11.196   83.640    5.612    5.227
>>
>> 8) client: default, server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  12.253   12.552   11.773   84.044    2.230    1.313
>>  33554432  13.177   12.456   11.604   82.723    4.316    2.585
>>  16777216  12.471   12.318   13.006   81.324    1.878    5.083
>>
>> 9) client: 64 max_sectors_kb, default RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  14.409   13.311   14.278   73.238    2.624    1.144
>>  33554432  14.665   14.260   14.080   71.455    1.211    2.233
>>  16777216  14.179   14.810   14.640   70.438    1.303    4.402
>>
>> 10) client: default max_sectors_kb, 2MB RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  13.401   14.107   13.549   74.860    1.642    1.170
>>  33554432  14.575   13.221   14.428   72.894    3.236    2.278
>>  16777216  13.771   14.227   13.594   73.887    1.408    4.618
>>
>> 11) client: 64 max_sectors_kb, 2MB RA. server: 64 max_sectors_kb, RA 2MB
>> blocksize      R1       R2       R3   R(avg)   R(std)        R
>>   (bytes)     (s)      (s)      (s)   (MB/s)   (MB/s)   (IOPS)
>>  67108864  10.286   12.272   10.245   94.317    7.690    1.474
>>  33554432  10.241   10.415   13.374   91.624   10.670    2.863
>>  16777216  10.499   10.224   10.792   97.526    2.151    6.095
>>
>> The last result comes close to 100MB/s!
> 
> Good! Although I expected the maximum with a single thread.
> 
> Can you do the same set of tests with the deadline scheduler on the server?

The case of 5 I/O threads (the default) will also be interesting, i.e., 
overall, the cases of 1, 2 and 5 I/O threads with the deadline scheduler 
on the server.
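
For reference, roughly (the device name is illustrative, and reloading
the module is assumed to be acceptable):

  # server: switch the backing device to the deadline scheduler
  echo deadline > /sys/block/sdb/queue/scheduler

  # reload scst_vdisk with the desired number of I/O threads (1, 2 or 5)
  modprobe -r scst_vdisk
  modprobe scst_vdisk num_threads=2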

Thanks,
Vlad

