Date: Wed, 13 Aug 2014 11:54:00 +0200
From: Kevin Wolf
To: Paolo Bonzini
Cc: Ming Lei, Fam Zheng, qemu-devel, Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v1 00/17] dataplane: optimization and multi virtqueue support
Message-ID: <20140813095400.GA3701@noname.redhat.com>
In-Reply-To: <53EA6635.9040600@redhat.com>

On 12.08.2014 at 21:08, Paolo Bonzini wrote:
> On 12/08/2014 10:12, Ming Lei wrote:
> >> > The below patch is basically the minimal change to bypass coroutines.
> >> > Of course the block.c part is not acceptable as is (the change to
> >> > refresh_total_sectors is broken, the others are just ugly), but it is
> >> > a start. Please run it with your fio workloads, or write an aio-based
> >> > version of a qemu-img/qemu-io *I/O* benchmark.
> >
> > Could you explain why the new change is introduced?
>
> It provides a fast path for bdrv_aio_readv/writev whenever there is
> nothing to do after the driver routine returns. In this case there is
> no need to wrap the AIOCB returned by the driver routine.
>
> It doesn't go all the way, and in particular it doesn't reverse
> completely the roles of bdrv_co_readv/writev vs. bdrv_aio_readv/writev.

That's actually why I think it's an option. Remember that, like you say
below, we're optimising for an extreme case here, and I certainly don't
want to hurt the common case for it. I can't imagine a way of reversing
the roles without multiplying the cost for the coroutine path. Or do you
have a clever solution for how you'd go about it without having an
impact on the common case?
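Just to make sure we're talking about the same thing, this is roughly how
I picture the fast path (an untested sketch, not your actual patch;
can_bypass_coroutine() and bdrv_aio_readv_fast() are made-up names, and a
real version certainly has to check more conditions than these):

#include "block/block_int.h"

/* Hypothetical helper: true when nothing has to run after the driver's
 * own AIO routine returns, so its AIOCB can be handed back unwrapped. */
static bool can_bypass_coroutine(BlockDriverState *bs)
{
    return bs->drv && bs->drv->bdrv_aio_readv &&
           !bs->io_limits_enabled &&     /* no throttling to apply       */
           !bs->copy_on_read;            /* no copy-on-read bookkeeping  */
}

/* Hypothetical entry point: dispatch straight to the driver when
 * possible, otherwise fall back to the usual coroutine-based path. */
static BlockDriverAIOCB *bdrv_aio_readv_fast(BlockDriverState *bs,
                                             int64_t sector_num,
                                             QEMUIOVector *qiov,
                                             int nb_sectors,
                                             BlockDriverCompletionFunc *cb,
                                             void *opaque)
{
    if (can_bypass_coroutine(bs)) {
        /* No coroutine creation and no extra AIOCB wrapping here. */
        return bs->drv->bdrv_aio_readv(bs, sector_num, qiov, nb_sectors,
                                       cb, opaque);
    }

    /* Slow path: the existing coroutine-based implementation. */
    return bdrv_aio_readv(bs, sector_num, qiov, nb_sectors, cb, opaque);
}

If the checks really stay this cheap, the coroutine path should barely
notice them; that's the part I'd want to measure before deciding anything.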
> But it is enough to provide something that is not dataplane-specific,
> does not break various functionality that we need to add to dataplane
> virtio-blk, does not mess up the semantics of the block layer, and lets
> you run benchmarks.
>
> > I will hold it until we can align to the coroutine cost computation,
> > because it is very important for the discussion.
>
> First of all, note that the coroutine cost is totally pointless in the
> discussion unless you have 100% CPU time and the dataplane thread
> becomes CPU bound. You haven't said if this is the case.

That's probably the implicit assumption. As I said, it's an extreme case
we're trying to look at. I'm not sure how realistic it is when you don't
work with ramdisks...

> Second, if the coroutine cost is relevant, the profile is really too
> flat to do much about it. The only solution (and here I *think* I
> disagree slightly with Kevin) is to get rid of it, which is not even
> too hard to do.

I think we just need to make the best use of coroutines. I would really
love to show you numbers, but I'm having a hard time benchmarking all
this stuff. When I test only the block layer with 'qemu-img bench', the
optimisations clearly work, but they don't translate yet into clear
improvements for actual guests. I think other things on the path from
the guest to qemu slow it down enough that in the end the coroutine part
doesn't matter much any more.

By the way, I just noticed that sequential reads were significantly
faster (~25%) for me without dataplane than with it. I didn't expect to
gain anything with dataplane on this setup, but I certainly didn't
expect to lose that much either. There might be more to gain there than
by optimising or removing coroutines.

> The problem is that your patches do touch too much code and subtly
> break too much stuff. The one I wrote does have a little breakage
> because I don't understand bs->growable 100% and I didn't really put
> much effort into it (my deadline being basically "be done as soon as
> the shower is free"), and it is ugly as hell, _but_ it should be
> compatible with the way the block layer works.

Yes, your patch is definitely much more palatable than Ming's. The part
that I still don't like about it is that it would be stating "in the
common case, we're only doing the second-best thing". I'm not yet
convinced that coroutines necessarily perform worse than state-passing
callbacks.

Kevin
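P.S.: To spell out what I mean by that last paragraph, here is a
deliberately silly, QEMU-free comparison (nothing below exists in the
tree; it only illustrates where the per-request state lives in each
model):

#include <stdio.h>
#include <stdlib.h>

typedef void (*completion_cb)(void *opaque, int ret);

/* Callback style: every value a later step needs has to be carried in an
 * explicitly allocated state object and threaded through the chain. */
typedef struct ReadState {
    int bytes_done;
    completion_cb cb;
    void *opaque;
} ReadState;

static void read_second_half(void *opaque, int ret)
{
    ReadState *s = opaque;

    s->bytes_done += ret;
    s->cb(s->opaque, s->bytes_done);
    free(s);
}

static void read_first_half(completion_cb cb, void *opaque)
{
    ReadState *s = malloc(sizeof(*s));

    s->bytes_done = 512;            /* pretend the first 512 bytes arrived */
    s->cb = cb;
    s->opaque = opaque;
    read_second_half(s, 512);       /* pretend the rest arrived, too       */
}

/* Coroutine style: the same intermediate values are plain locals; a yield
 * point would simply suspend this stack frame instead of splitting the
 * function and allocating state on the heap. */
static int read_straight_line(void)
{
    int bytes_done = 512;           /* first half  */
    bytes_done += 512;              /* second half */
    return bytes_done;
}

static void done(void *opaque, int ret)
{
    printf("callback style: %d bytes\n", ret);
}

int main(void)
{
    read_first_half(done, NULL);
    printf("coroutine style: %d bytes\n", read_straight_line());
    return 0;
}

Whether the allocation and the extra indirect calls end up costing more
or less than creating and switching a coroutine is exactly the kind of
thing I'd rather measure than assume.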