Date: Wed, 15 Dec 2010 11:42:12 +0000
From: Stefan Hajnoczi
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
In-Reply-To: <20101213185251.GA9554@redhat.com>
To: "Michael S. Tsirkin"
Cc: qemu-devel@nongnu.org

On Mon, Dec 13, 2010 at 6:52 PM, Michael S. Tsirkin wrote:
> On Mon, Dec 13, 2010 at 05:57:28PM +0000, Stefan Hajnoczi wrote:
>> On Mon, Dec 13, 2010 at 4:28 PM, Stefan Hajnoczi wrote:
>> > On Mon, Dec 13, 2010 at 4:12 PM, Michael S. Tsirkin wrote:
>> >> On Mon, Dec 13, 2010 at 03:27:06PM +0000, Stefan Hajnoczi wrote:
>> >>> On Mon, Dec 13, 2010 at 1:36 PM, Michael S. Tsirkin wrote:
>> >>> > On Mon, Dec 13, 2010 at 03:35:38PM +0200, Michael S. Tsirkin wrote:
>> >>> >> On Mon, Dec 13, 2010 at 01:11:27PM +0000, Stefan Hajnoczi wrote:
>> >>> >> > Fresh results:
>> >>> >> >
>> >>> >> > 192.168.0.1 - host (runs netperf)
>> >>> >> > 192.168.0.2 - guest (runs netserver)
>> >>> >> >
>> >>> >> > host$ src/netperf -H 192.168.0.2 -- -m 200
>> >>> >> >
>> >>> >> > ioeventfd=on
>> >>> >> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.2
>> >>> >> > (192.168.0.2) port 0 AF_INET
>> >>> >> > Recv   Send    Send
>> >>> >> > Socket Socket  Message  Elapsed
>> >>> >> > Size   Size    Size     Time     Throughput
>> >>> >> > bytes  bytes   bytes    secs.    10^6bits/sec
>> >>> >> >  87380  16384    200    10.00    1759.25
>> >>> >> >
>> >>> >> > ioeventfd=off
>> >>> >> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.2
>> >>> >> > (192.168.0.2) port 0 AF_INET
>> >>> >> > Recv   Send    Send
>> >>> >> > Socket Socket  Message  Elapsed
>> >>> >> > Size   Size    Size     Time     Throughput
>> >>> >> > bytes  bytes   bytes    secs.    10^6bits/sec
>> >>> >> >  87380  16384    200    10.00    1757.15
>> >>> >> >
>> >>> >> > The results vary approx +/- 3% between runs.
>> >>> >> >
>> >>> >> > Invocation:
>> >>> >> > $ x86_64-softmmu/qemu-system-x86_64 -m 4096 -enable-kvm -netdev
>> >>> >> > type=tap,id=net0,ifname=tap0,script=no,downscript=no -device
>> >>> >> > virtio-net-pci,netdev=net0,ioeventfd=on|off -vnc :0 -drive
>> >>> >> > if=virtio,cache=none,file=$HOME/rhel6-autobench-raw.img
>> >>> >> >
>> >>> >> > I am running qemu.git with v5 patches, based off
>> >>> >> > 36888c6335422f07bbc50bf3443a39f24b90c7c6.
>> >>> >> >
>> >>> >> > Host:
>> >>> >> > 1 Quad-Core AMD Opteron(tm) Processor 2350 @ 2 GHz
>> >>> >> > 8 GB RAM
>> >>> >> > RHEL 6 host
>> >>> >> >
>> >>> >> > Next I will try the patches on latest qemu-kvm.git
>> >>> >> >
>> >>> >> > Stefan
>> >>> >>
>> >>> >> One interesting thing is that I put virtio-net earlier on the
>> >>> >> command line.
>> >>> >
>> >>> > Sorry, I meant I put it after the disk; you put it before.
>> >>>
>> >>> I can't find a measurable difference when swapping -drive and -netdev.
>> >>
>> >> One other concern I have is that we are apparently using
>> >> ioeventfd for all VQs. E.g. for virtio-net we probably should not
>> >> use it for the control VQ - it's a waste of resources.
>> >
>> > One option is a per-device (block, net, etc) bitmap that masks out
>> > virtqueues.  Is that something you'd like to see?
>> >
>> > I'm tempted to mask out the RX vq too and see how that affects the
>> > qemu-kvm.git specific issue.
>>
>> As expected, the rx virtqueue is involved in the degradation.  I
>> enabled ioeventfd only for the TX virtqueue and got the same good
>> results as userspace virtio-net.
>>
>> When I enable only the rx virtqueue, performance decreases as we've
>> seen above.
>>
>> Stefan
>
> Interesting. In particular this implies something's wrong with the
> queue: we should not normally be getting notifications from the rx
> queue at all. Is it running low on buffers? Does it help to increase
> the vq size?  Any other explanation?

I made a mistake: it is the *tx* vq that causes reduced performance on
short packets with ioeventfd. I double-checked the results and the rx
vq doesn't affect performance.

Initially I thought the fix would be to adjust the tx mitigation
mechanism, since ioeventfd does its own mitigation of sorts: multiple
eventfd signals are coalesced into one qemu-kvm event handler call if
qemu-kvm hasn't had a chance to handle the first event before the
eventfd is signalled again.
I added a -device virtio-net-pci tx=immediate option to flush the TX
queue immediately instead of scheduling a BH or timer. Unfortunately
this had little measurable effect and performance stayed the same.
This suggests most of the latency is between the guest's pio write and
qemu-kvm getting around to handling the event.

You mentioned that vhost-net has the same performance issue on this
benchmark. I guess a solution for vhost-net may help virtio-ioeventfd,
and vice versa.

Are you happy with this patchset if I remove virtio-net-pci
ioeventfd=on|off, so that only virtio-blk-pci has ioeventfd=on|off
(defaulting to on)? For block we've found it to be a win, and the
initial results looked good for net too.

Stefan