From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jens Axboe
Subject: Re: [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback
Date: Mon, 10 Aug 2015 09:52:46 -0600
Message-ID: <55C8C8CE.7020301__11283.184917602$1439222103$gmane$org@fb.com>
References: <1410479844-2864-1-git-send-email-avanzini.arianna@gmail.com>
 <20141001202721.GF12581@laptop.dumpdata.com>
 <20150428073646.GA16022@infradead.org>
 <553F3ADF.3000301@gmail.com>
 <555327A5.1060200@oracle.com>
 <5592A5EF.2050005@citrix.com>
 <55935848.7080909@fb.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
 by lists.xen.org with esmtp (Exim 4.72) id 1ZOpO2-0008LE-LH
 for xen-devel@lists.xenproject.org; Mon, 10 Aug 2015 15:53:18 +0000
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Rafal Mielniczuk, Marcus Granado, Bob Liu, Arianna Avanzini
Cc: Jonathan Davies, Felipe Franciosi, "linux-kernel@vger.kernel.org",
 Christoph Hellwig, David Vrabel, "xen-devel@lists.xenproject.org",
 "boris.ostrovsky@oracle.com"
List-Id: xen-devel@lists.xenproject.org

On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
> On 01/07/15 04:03, Jens Axboe wrote:
>> On 06/30/2015 08:21 AM, Marcus Granado wrote:
>>> Hi,
>>>
>>> Our measurements for the multiqueue patch indicate a clear improvement
>>> in iops when more queues are used.
>>>
>>> The measurements were obtained under the following conditions:
>>>
>>> - using blkback as the dom0 backend with the multiqueue patch applied
>>>   to a dom0 kernel 4.0 on 8 vcpus.
>>>
>>> - using a recent Ubuntu 15.04 kernel 3.19 with the multiqueue frontend
>>>   patch applied, used as a guest on 4 vcpus.
>>>
>>> - using a Micron RealSSD P320h as the underlying local storage on a
>>>   Dell PowerEdge R720 with 2 Xeon E5-2643 v2 cpus.
>>>
>>> - fio 2.2.7-22-g36870 as the generator of synthetic loads in the
>>>   guest. We used direct I/O to skip caching in the guest and ran fio
>>>   for 60s, reading a number of block sizes ranging from 512 bytes to
>>>   4MiB. A queue depth of 32 for each queue was used to saturate
>>>   individual vcpus in the guest.
>>>
>>> We were interested in observing storage iops for different values of
>>> block sizes. Our expectation was that iops would improve when
>>> increasing the number of queues, because both the guest and dom0 would
>>> be able to make use of more vcpus to handle these requests.
>>>
>>> These are the results (as aggregate iops for all the fio threads) that
>>> we got for the conditions above with sequential reads:
>>>
>>> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops
>>>      8          32        512          158K          264K
>>>      8          32         1K          157K          260K
>>>      8          32         2K          157K          258K
>>>      8          32         4K          148K          257K
>>>      8          32         8K          124K          207K
>>>      8          32        16K           84K          105K
>>>      8          32        32K           50K           54K
>>>      8          32        64K           24K           27K
>>>      8          32       128K           11K           13K
>>>
>>> 8-queue iops was better than single-queue iops for all the block
>>> sizes. There were very good improvements as well for sequential writes
>>> with block size 4K (from 80K iops with a single queue to 230K iops
>>> with 8 queues), and no regressions were visible in any measurement
>>> performed.
>>
>> Great results! And I don't know why this code has lingered for so long,
>> so thanks for helping get some attention to this again.
>>
>> Personally I'd be really interested in the results for the same set of
>> tests, but without the blk-mq patches.
>> Do you have them, or could you potentially run them?
>>
> Hello,
>
> We reran the tests for sequential reads with identical settings, but with
> Bob Liu's multiqueue patches reverted from the dom0 and guest kernels.
> The results we obtained were *better* than the results we got with the
> multiqueue patches applied:
>
> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops  *no-mq-patches_iops*
>      8          32        512          158K          264K          321K
>      8          32         1K          157K          260K          328K
>      8          32         2K          157K          258K          336K
>      8          32         4K          148K          257K          308K
>      8          32         8K          124K          207K          188K
>      8          32        16K           84K          105K           82K
>      8          32        32K           50K           54K           36K
>      8          32        64K           24K           27K           16K
>      8          32       128K           11K           13K           11K
>
> We noticed that requests are not merged by the guest when the multiqueue
> patches are applied, which results in a regression for small block sizes
> (the RealSSD P320h's optimal block size is around 32-64KB).
>
> We observed a similar regression for the Dell MZ-5EA1000-0D3 100 GB 2.5"
> internal SSD.
>
> As I understand it, the blk-mq layer bypasses the I/O scheduler, which
> also effectively disables merges. Could you explain why it is difficult
> to enable merging in the blk-mq layer? That could help close the
> performance gap we observed.
>
> Otherwise, the tests show that the multiqueue patches do not improve
> performance, at least when it comes to sequential read/write operations.

blk-mq still provides merging, there should be no difference there. Do
the xen patches set BLK_MQ_F_SHOULD_MERGE?

-- 
Jens Axboe
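
For illustration, below is a minimal sketch of how a blk-mq driver of that
era opts into request merging when it registers its tag set. The blk-mq
calls and the BLK_MQ_F_SHOULD_MERGE flag are the real ~4.x kernel API; the
sketch_* names and fields are placeholders, not the actual xen-blkfront
code.

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <linux/string.h>

/* Hypothetical per-device state; stands in for whatever the frontend
 * keeps per device (e.g. the number of rings negotiated with blkback). */
struct sketch_dev {
	unsigned int nr_rings;
	struct blk_mq_tag_set tag_set;
	struct request_queue *rq;
};

/* Hypothetical blk_mq_ops (queue_rq etc.), defined elsewhere. */
extern struct blk_mq_ops sketch_mq_ops;

static int sketch_init_queue(struct sketch_dev *dev)
{
	struct blk_mq_tag_set *set = &dev->tag_set;
	int err;

	memset(set, 0, sizeof(*set));
	set->ops = &sketch_mq_ops;
	set->nr_hw_queues = dev->nr_rings;  /* one hardware queue per ring */
	set->queue_depth = 32;
	set->numa_node = NUMA_NO_NODE;
	/*
	 * Without BLK_MQ_F_SHOULD_MERGE, blk-mq does not try to merge
	 * incoming bios into already-queued requests, so sequential I/O
	 * reaches the backend as many small requests.
	 */
	set->flags = BLK_MQ_F_SHOULD_MERGE;
	set->driver_data = dev;

	err = blk_mq_alloc_tag_set(set);
	if (err)
		return err;

	dev->rq = blk_mq_init_queue(set);
	if (IS_ERR(dev->rq)) {
		blk_mq_free_tag_set(set);
		return PTR_ERR(dev->rq);
	}
	return 0;
}

In 4.x-era kernels this flag is what gives a blk-mq driver the merging
behaviour that the legacy request path got from its elevator, so whether
the frontend sets it is a natural first thing to check when merges stop
happening.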