From: Rafal Mielniczuk <rafal.mielniczuk@citrix.com>
To: Bob Liu <bob.liu@oracle.com>
Cc: Jens Axboe <axboe@fb.com>,
	Marcus Granado <Marcus.Granado@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Felipe Franciosi <felipe.franciosi@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Christoph Hellwig <hch@infradead.org>,
	"David Vrabel" <david.vrabel@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	Jonathan Davies <Jonathan.Davies@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback
Date: Tue, 18 Aug 2015 09:45:17 +0000
Message-ID: <A1D98E0E70C35541AEBDE192A520C5434DF4BF@AMSPEX01CL03.citrite.net>
In-Reply-To: <A1D98E0E70C35541AEBDE192A520C5434DD27A@AMSPEX01CL03.citrite.net>

On 14/08/15 13:30, Rafal Mielniczuk wrote:
> On 14/08/15 09:31, Bob Liu wrote:
>> On 08/13/2015 12:46 AM, Rafal Mielniczuk wrote:
>>> On 12/08/15 11:17, Bob Liu wrote:
>>>> On 08/12/2015 01:32 AM, Jens Axboe wrote:
>>>>> On 08/11/2015 03:45 AM, Rafal Mielniczuk wrote:
>>>>>> On 11/08/15 07:08, Bob Liu wrote:
>>>>>>> On 08/10/2015 11:52 PM, Jens Axboe wrote:
>>>>>>>> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
>>>> ...
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> We reran the tests for sequential reads with identical settings, but with Bob Liu's multiqueue patches reverted from the dom0 and guest kernels.
>>>>>>>>> The results we obtained were *better* than the results we got with multiqueue patches applied:
>>>>>>>>>
>>>>>>>>> fio_threads  io_depth  block_size   1-queue_iops  8-queue_iops  *no-mq-patches_iops*
>>>>>>>>>        8           32       512           158K         264K         321K
>>>>>>>>>        8           32        1K           157K         260K         328K
>>>>>>>>>        8           32        2K           157K         258K         336K
>>>>>>>>>        8           32        4K           148K         257K         308K
>>>>>>>>>        8           32        8K           124K         207K         188K
>>>>>>>>>        8           32       16K            84K         105K         82K
>>>>>>>>>        8           32       32K            50K          54K         36K
>>>>>>>>>        8           32       64K            24K          27K         16K
>>>>>>>>>        8           32      128K            11K          13K         11K
>>>>>>>>>
>>>>>>>>> We noticed that the requests are not merged by the guest when the multiqueue patches are applied,
>>>>>>>>> which results in a regression for small block sizes (RealSSD P320h's optimal block size is around 32-64KB).
>>>>>>>>>
>>>>>>>>> We observed a similar regression for the Dell MZ-5EA1000-0D3 100 GB 2.5" internal SSD.
>>>>>>>>>
>>>>>>>>> As I understand it, the blk-mq layer bypasses the I/O scheduler, which also effectively disables merges.
>>>>>>>>> Could you explain why it is difficult to enable merging in the blk-mq layer?
>>>>>>>>> That could help close the performance gap we observed.
>>>>>>>>>
>>>>>>>>> Otherwise, the tests show that the multiqueue patches do not improve performance,
>>>>>>>>> at least when it comes to sequential read/write operations.
>>>>>>>> blk-mq still provides merging; there should be no difference there. Do the xen patches set BLK_MQ_F_SHOULD_MERGE?
>>>>>>>>
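BLK_MQ_F_SHOULD_MERGE is set on the driver's blk_mq_tag_set when it registers with blk-mq. A minimal sketch of such a registration, with illustrative names and values rather than the actual xen-blkfront code:

    #include <linux/blk-mq.h>

    static struct blk_mq_ops example_mq_ops;        /* illustrative; a real driver fills in queue_rq etc. */
    static struct blk_mq_tag_set example_tag_set;

    static struct request_queue *example_init_queue(void)
    {
            example_tag_set.ops          = &example_mq_ops;
            example_tag_set.nr_hw_queues = 1;
            example_tag_set.queue_depth  = 64;                    /* illustrative depth */
            example_tag_set.numa_node    = NUMA_NO_NODE;
            example_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE; /* ask blk-mq to attempt bio merging */

            if (blk_mq_alloc_tag_set(&example_tag_set))
                    return NULL;
            return blk_mq_init_queue(&example_tag_set);           /* queue the driver attaches to its gendisk */
    }

Without that flag, blk-mq skips the merge attempt entirely and every bio gets its own request.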
>>>>>>> Yes.
>>>>>>> Is it possible that the xen-blkfront driver dequeues requests too fast once we have multiple hardware queues?
>>>>>>> New requests then don't have a chance to merge with old requests which were already dequeued and issued.
>>>>>>>
>>>>>> For some reason we don't see merges even when we set multiqueue to 1.
>>>>>> Below are some stats from the guest system when doing sequential 4KB reads:
>>>>>>
>>>>>> $ fio --name=test --ioengine=libaio --direct=1 --rw=read --numjobs=8 \
>>>>>>       --iodepth=32 --time_based=1 --runtime=300 --bs=4KB \
>>>>>>       --filename=/dev/xvdb
>>>>>>
>>>>>> $ iostat -xt 5 /dev/xvdb
>>>>>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>>>>>             0.50    0.00    2.73   85.14    2.00    9.63
>>>>>>
>>>>>> Device:         rrqm/s   wrqm/s       r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>>>>>> xvdb              0.00     0.00 156926.00    0.00 627704.00     0.00     8.00    30.06    0.19    0.19    0.00   0.01 100.48
>>>>>>
>>>>>> $ cat /sys/block/xvdb/queue/scheduler
>>>>>> none
>>>>>>
>>>>>> $ cat /sys/block/xvdb/queue/nomerges
>>>>>> 0
>>>>>>
>>>>>> Relevant bits from the xenstore configuration on the dom0:
>>>>>>
>>>>>> /local/domain/0/backend/vbd/2/51728/dev = "xvdb"
>>>>>> /local/domain/0/backend/vbd/2/51728/backend-kind = "vbd"
>>>>>> /local/domain/0/backend/vbd/2/51728/type = "phy"
>>>>>> /local/domain/0/backend/vbd/2/51728/multi-queue-max-queues = "1"
>>>>>>
>>>>>> /local/domain/2/device/vbd/51728/multi-queue-num-queues = "1"
>>>>>> /local/domain/2/device/vbd/51728/ring-ref = "9"
>>>>>> /local/domain/2/device/vbd/51728/event-channel = "60"
>>>>> What happens if you add --iodepth_batch=16 to that fio command line? Both mq and non-mq rely on plugging to get
>>>>> batching in the use case above; otherwise IO is dispatched immediately, and O_DIRECT is immediate.
>>>>> I'd be more interested in seeing a test case with buffered IO of a file system on top of the xvdb device;
>>>>> if we're missing merging for that case, then that's a much bigger issue.
>>>>>
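The plugging mentioned above is the block layer's batching mechanism: a submitter opens a per-task plug, queues a batch of bios, and unplugs, and that window is where requests get a chance to merge before dispatch. A minimal sketch of the kernel-side pattern, assuming a caller that has already built an array of bios (nr_bios and bios are placeholders, and the two-argument submit_bio() is the 4.1-era signature):

    #include <linux/blkdev.h>

    struct blk_plug plug;
    int i;

    blk_start_plug(&plug);                  /* open a per-task plug list */
    for (i = 0; i < nr_bios; i++)
            submit_bio(READ, bios[i]);      /* bios queue up and can merge while plugged */
    blk_finish_plug(&plug);                 /* unplug: coalesced requests are dispatched */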
>>>>  
>>>> I was using the null block driver for the xen blk-mq test.
>>>>
>>>> There were no merges happening any more even after the patch:
>>>> https://lkml.org/lkml/2015/7/13/185
>>>> (which just converted the xen block driver to use the blk-mq APIs).
>>>>
>>>> Will try a file system soon.
>>>>
>>> I have more results for the guest with and without the patch
>>> https://lkml.org/lkml/2015/7/13/185
>>> applied to the latest stable kernel (4.1.5).
>>>
>> Thank you.
>>
>>> Command line used was:
>>> fio --name=test --ioengine=libaio --rw=read --numjobs=8 \
>>>     --iodepth=32 --time_based=1 --runtime=300 --bs=4KB \
>>>     --filename=/dev/xvdb --direct=(0 and 1) --iodepth_batch=16
>>>
>>> without patch (--direct=1):
>>>   xvdb: ios=18696304/0, merge=75763177/0, ticks=11323872/0, in_queue=11344352, util=100.00%
>>>
>>> with patch (--direct=1):
>>>   xvdb: ios=43709976/0, merge=97/0, ticks=8851972/0, in_queue=8902928, util=100.00%
>>>
>> So request merges can still happen; they are just harder to trigger.
>> What are the iops in the two cases?
> Without the patch it is 318K iops, with the patch 146K iops.
>
>>> without patch buffered (--direct=0):
>>>   xvdb: ios=1079051/0, merge=76/0, ticks=749364/0, in_queue=748840, util=94.60
>>>
>>> with patch buffered (--direct=0):
>>>   xvdb: ios=1132932/0, merge=0/0, ticks=689108/0, in_queue=688488, util=93.32%
>>>
> There seems to be very little difference when we measure buffered
> sequential reads.
> Although iostat shows almost no merges happening in either case,
> the avgrq-sz is around 250 sectors (125KB). Does that mean that the
> merges are actually happening at some other layer, not visible to iostat?
>
> There is a big discrepancy for direct sequential reads with small block
> sizes, where we are missing the merges that were happening in the version
> before the patch. It looks like requests do not reside in the queue long
> enough to get merged.
>
> One thing I noticed is that in block/blk-mq.c, in the function
>
> bool blk_mq_attempt_merge(struct request_queue *q,
>                           struct blk_mq_ctx *ctx, struct bio *bio)
>
> the ctx->rq_list queue is mostly empty, so the loop inside the body
> of the function is almost never executed.
>
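For reference, a simplified sketch of that merge attempt as I read the 4.1 blk-mq code (not the verbatim kernel function, details may differ): if ctx->rq_list is empty there are no candidate requests, the loop body never runs, and every bio ends up as its own request.

    static bool attempt_merge_sketch(struct request_queue *q,
                                     struct blk_mq_ctx *ctx, struct bio *bio)
    {
            struct request *rq;

            /* scan the per-cpu software queue for a request this bio can join */
            list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
                    if (!blk_rq_merge_ok(rq, bio))
                            continue;
                    switch (blk_try_merge(rq, bio)) {
                    case ELEVATOR_BACK_MERGE:
                            return bio_attempt_back_merge(q, rq, bio);
                    case ELEVATOR_FRONT_MERGE:
                            return bio_attempt_front_merge(q, rq, bio);
                    }
            }
            return false;   /* no candidates: the bio gets a request of its own */
    }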
Hi,

I was able to reproduce Bob's results with the null_blk device using the default module parameters.

Also, when I increased the completion time of the requests,
I could see merges happening in the version without the patch, which resulted in greater throughput.

Could it be because requests had time to accumulate in the queue and so had a chance to be merged?
Why did merges not happen in the version with the patch, then? Is the patch missing the plugging Jens mentioned,
or is it a problem in blk-mq itself?
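For what it's worth, one way a blk-mq driver can see the end of a plugged batch is the last flag in the blk_mq_queue_data passed to its queue_rq hook. A hypothetical sketch of using it to defer the backend kick (the example_* names are made up, this is not the actual xen-blkfront code):

    static int example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                const struct blk_mq_queue_data *bd)
    {
            struct example_dev *dev = hctx->queue->queuedata;  /* hypothetical per-device state */

            example_ring_queue_request(dev, bd->rq);           /* hypothetical: put the request on the ring */

            if (bd->last)
                    example_notify_backend(dev);               /* hypothetical: kick the backend once per batch */

            return BLK_MQ_RQ_QUEUE_OK;
    }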

fio --name=test --ioengine=libaio --rw=read --numjobs=8 --iodepth=32 \
    --time_based=1 --runtime=30 --bs=4KB --filename=/dev/xvdb \
    --direct=1 --group_reporting=1 --iodepth_batch=16

========================================================================
modprobe null_blk
========================================================================
------------------------------------------------------------------------
*no patch* (avgrq-sz = 8.00 avgqu-sz=5.00)
------------------------------------------------------------------------
READ: io=10655MB, aggrb=363694KB/s, minb=363694KB/s, maxb=363694KB/s, mint=30001msec, maxt=30001msec

Disk stats (read/write):
  xvdb: ios=2715852/0, merge=1089/0, ticks=126572/0, in_queue=127456, util=100.00%

------------------------------------------------------------------------
*with patch* (avgrq-sz = 8.00 avgqu-sz=8.00)
------------------------------------------------------------------------
READ: io=20655MB, aggrb=705010KB/s, minb=705010KB/s, maxb=705010KB/s, mint=30001msec, maxt=30001msec

Disk stats (read/write):
  xvdb: ios=5274633/0, merge=22/0, ticks=243208/0, in_queue=242908, util=99.98%

========================================================================
modprobe null_blk irqmode=2 completion_nsec=1000000
========================================================================
------------------------------------------------------------------------
*no patch* (avgrq-sz = 34.00 avgqu-sz=38.00)
------------------------------------------------------------------------
READ: io=10372MB, aggrb=354008KB/s, minb=354008KB/s, maxb=354008KB/s, mint=30003msec, maxt=30003msec

Disk stats (read/write):
  xvdb: ios=621760/0, *merge=1988170/0*, ticks=1136700/0, in_queue=1146020, util=99.76%

------------------------------------------------------------------------
*with patch* (avgrq-sz = 8.00 avgqu-sz=28.00)
------------------------------------------------------------------------
READ: io=2876.8MB, aggrb=98187KB/s, minb=98187KB/s, maxb=98187KB/s, mint=30002msec, maxt=30002msec

Disk stats (read/write):
  xvdb: ios=734048/0, merge=0/0, ticks=843584/0, in_queue=843080, util=99.72%
