From: "Michael S. Tsirkin"
Subject: Re: [PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
Date: Mon, 22 Jun 2020 12:29:28 -0400
Message-ID: <20200622122546-mutt-send-email-mst@kernel.org>
References: <20200611113404.17810-1-mst@redhat.com> <20200611113404.17810-3-mst@redhat.com> <20200611152257.GA1798@char.us.oracle.com> <20200622114622-mutt-send-email-mst@kernel.org>
To: Eugenio Perez Martin
Cc: Konrad Rzeszutek Wilk, linux-kernel@vger.kernel.org, kvm list, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, Jason Wang

On Mon, Jun 22, 2020 at 06:11:21PM +0200, Eugenio Perez Martin wrote:
> On Mon, Jun 22, 2020 at 5:55 PM Michael S. Tsirkin wrote:
> >
> > On Fri, Jun 19, 2020 at 08:07:57PM +0200, Eugenio Perez Martin wrote:
> > > On Mon, Jun 15, 2020 at 2:28 PM Eugenio Perez Martin wrote:
> > > >
> > > > On Thu, Jun 11, 2020 at 5:22 PM Konrad Rzeszutek Wilk wrote:
> > > > >
> > > > > On Thu, Jun 11, 2020 at 07:34:19AM -0400, Michael S. Tsirkin wrote:
> > > > > > As testing shows no performance change, switch to that now.
> > > > >
> > > > > What kind of testing? 100GiB? Low latency?
> > > >
> > > > Hi Konrad.
> > > >
> > > > I tested this version of the patch:
> > > > https://lkml.org/lkml/2019/10/13/42
> > > >
> > > > It was tested for throughput with DPDK's testpmd (as described in
> > > > http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html)
> > > > and kernel pktgen. No latency tests were performed by me. It might be
> > > > interesting to perform a latency test, or just a different set of
> > > > tests, over a recent version.
> > > >
> > > > Thanks!
> > >
> > > I have repeated the tests with v9, and the results are a little bit
> > > different:
> > > * If I test opening it with testpmd, I see no change between versions.
> >
> > OK, that is testpmd on the guest, right? And vhost-net on the host?
>
> Hi Michael.
>
> No, sorry, it is as described in
> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html.
> But I could also test it in the guest.
>
> These kinds of raw packet "bursts" do not show performance differences,
> but I could test more deeply if you think it would be worth it.

Oh OK, so this is without a guest, with virtio-user. It might be worth
checking DPDK within the guest too, just as another data point.

> > > * If I forward packets between two vhost-net interfaces in the guest
> > >   using a linux bridge in the host:
> >
> > And here I guess you mean virtio-net in the guest kernel?
>
> Yes, sorry: two virtio-net interfaces connected with a linux bridge in
> the host. More precisely:
> * Adding one of the interfaces to another namespace, assigning it an
>   IP, and starting netserver there.
> * Assigning another IP in the range manually to the other virtual net
>   interface, and starting the desired test there.
>
> If you think it would be better to perform them differently, please
> let me know.

Not sure why you bother with namespaces, since you said you are using
L2 bridging. I guess it's unimportant.

> > > - netperf UDP_STREAM shows a performance increase of 1.8x, almost
> > >   doubling performance. This gets lower as the frame size increases.
> > > - The rest of the tests go noticeably worse: UDP_RR goes from ~6347
> > >   transactions/sec to 5830.
> >
> > OK, so it seems plausible that we still have a bug where an interrupt
> > is delayed. That is the main difference between pmd and virtio.
> > Let's try disabling event index, and see what happens - that's the
> > trickiest part of interrupts.
>
> Got it, will get back with the results.
>
> Thank you very much!
>
> > > - TCP_STREAM goes from ~10.7 Gbps to ~7 Gbps.
> > > - TCP_RR goes from 6223.64 transactions/sec to 5739.44.
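
As a reference for the setup described above (one virtio-net interface
moved into a namespace running netserver, the other addressed in the
default namespace and used to launch the tests), here is a rough sketch
of how that could be driven. The namespace name, the interface names and
the 192.168.100.0/24 addresses are illustrative assumptions, not taken
from the actual test; it needs root, iproute2 and netperf in the guest.

#!/usr/bin/env python3
"""Rough sketch of the guest-side topology described above: one virtio-net
interface is moved into its own network namespace and runs netserver; the
other stays in the default namespace, gets an address in the same range,
and is used to launch the netperf tests. All names and addresses below are
assumptions for illustration only."""

import subprocess

NS = "netperf-ns"                       # assumed namespace name
SERVER_IF, CLIENT_IF = "eth1", "eth2"   # assumed names of the two virtio-net NICs
SERVER_IP, CLIENT_IP = "192.168.100.1", "192.168.100.2"   # assumed addresses


def sh(*cmd):
    """Run a command and fail loudly if it returns non-zero."""
    subprocess.run(cmd, check=True)


def setup():
    # Move one interface into the namespace, address it, start netserver there.
    sh("ip", "netns", "add", NS)
    sh("ip", "link", "set", SERVER_IF, "netns", NS)
    sh("ip", "netns", "exec", NS, "ip", "addr", "add", SERVER_IP + "/24",
       "dev", SERVER_IF)
    sh("ip", "netns", "exec", NS, "ip", "link", "set", SERVER_IF, "up")
    sh("ip", "netns", "exec", NS, "netserver")

    # Address the other interface in the default namespace; traffic between
    # the two goes out through the host's linux bridge and back in.
    sh("ip", "addr", "add", CLIENT_IP + "/24", "dev", CLIENT_IF)
    sh("ip", "link", "set", CLIENT_IF, "up")


def run_tests(duration=30):
    # Same test mix as the numbers quoted above.
    for test in ("UDP_STREAM", "UDP_RR", "TCP_STREAM", "TCP_RR"):
        sh("netperf", "-H", SERVER_IP, "-t", test, "-l", str(duration))


if __name__ == "__main__":
    setup()
    run_tests()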
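
And since event index is called out above as the trickiest part of
interrupts, here is a small, self-contained model of the suppression
check it enables - the same arithmetic as vring_need_event() in
include/uapi/linux/virtio_ring.h, rewritten in Python purely to
illustrate the logic, not the vhost code itself:

def need_event(event_idx, new_idx, old_idx):
    """Return True if the device should signal the driver.

    With VIRTIO_RING_F_EVENT_IDX negotiated, the driver publishes the ring
    index it wants to be notified at (event_idx). After moving the used
    index from old_idx to new_idx, the device signals only if event_idx
    falls inside (old_idx, new_idx], with all arithmetic modulo 2**16.
    A wrong or delayed update of these indices means a late interrupt,
    which hurts the request/response tests (UDP_RR, TCP_RR) while bulk
    STREAM throughput can still look fine.
    """
    return ((new_idx - event_idx - 1) & 0xFFFF) < ((new_idx - old_idx) & 0xFFFF)


# Device advanced the used index from 10 to 15:
assert need_event(12, 15, 10)        # driver asked to be woken at 12: signal
assert not need_event(20, 15, 10)    # driver asked for 20: suppress interrupt
assert need_event(0, 3, 65530)       # the check also works across index wrap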