From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	wexu@redhat.com, jfreimann@redhat.com, tiwei.bie@intel.com,
	maxime.coquelin@redhat.com
Subject: Re: [PATCH net-next V2 0/8] Packed virtqueue support for vhost
Date: Mon, 16 Jul 2018 15:49:04 +0300	[thread overview]
Message-ID: <20180716154102-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <33f4643f-f226-0389-1f4f-607c289db94e@redhat.com>

On Mon, Jul 16, 2018 at 05:46:33PM +0800, Jason Wang wrote:
> 
> 
> > On 2018-07-16 16:39, Michael S. Tsirkin wrote:
> > On Mon, Jul 16, 2018 at 11:28:03AM +0800, Jason Wang wrote:
> > > Hi all:
> > > 
> > > This series implements packed virtqueues. The code was tested with
> > > Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
> > > 
> > > 
> > > Pktgen tests for both RX and TX do not show an obvious difference from
> > > split virtqueues. The main bottleneck is the guest Linux driver, since
> > > it cannot stress vhost to 100% CPU utilization. A full TCP
> > > benchmark is ongoing. I will test the virtio-net pmd as well when it
> > > is ready.
> > Well, the question then is why we should bother merging this
> > if it doesn't give a performance gain.
> 
> We hit bottlenecks elsewhere. I can only test the Linux driver, which has
> lots of overhead, e.g. interrupts, and perf shows that only a small
> fraction of time is spent on e.g. virtqueue manipulation. I hope the
> virtio-net pmd can give us a different result, but we don't have one ready
> for testing now. (Jens' V4 has bugs and thus cannot work with this series.)

Can't Linux busy poll? And how about testing loopback with XDP?
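
For what it's worth, busy polling can be enabled per socket from
userspace; a minimal sketch, assuming a plain test socket (illustration
only, not code from this series):

#include <stdio.h>
#include <sys/socket.h>

/*
 * Busy-wait up to 'usecs' microseconds for packets on blocking
 * receives instead of sleeping right away. Setting SO_BUSY_POLL may
 * require CAP_NET_ADMIN; the net.core.busy_read/busy_poll sysctls
 * enable the same behavior system-wide.
 */
static int enable_busy_poll(int fd)
{
	int usecs = 50;

	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &usecs, sizeof(usecs)) < 0) {
		perror("setsockopt(SO_BUSY_POLL)");
		return -1;
	}
	return 0;
}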

> >   Do you see
> > a gain in CPU utilization maybe?
> 
> Unfortunately not.
> 
> > 
> > If not - let's wait for that TCP benchmark result?
> 
> We can, but you know TCP_STREAM results are sometimes misleading.
> 
> A bunch of other patches of mine were rebased on this and are now blocked
> on this series. Considering we don't see a regression, maybe we can merge
> this first and try optimizations or fixups on top?
> 
> Thanks

I'm not sure I understand this approach. Packed ring is just an optimization.
What value is there in merging it if it does not help speed?

> > 
> > > Notes:
> > > - This version depends on Tiwei's series at https://patchwork.ozlabs.org/cover/942297/
> > > 
> > > This version was tested with:
> > > 
> > > - Zerocopy (Out of Order) support
> > > - vIOMMU support
> > > - mergeable buffer on/off
> > > - busy polling on/off
> > > - vsock (nc-vsock)
> > > 
> > > Changes from V1:
> > > - drop uapi patch and use Tiwei's
> > > - split the enablement of packed virtqueue into a separate patch
> > > 
> > > Changes from RFC V5:
> > > 
> > > - avoid unnecessary barriers during vhost_add_used_packed_n()
> > > - more compact math for event idx (see the sketch after the changelog)
> > > - fix failure of SET_VRING_BASE when avail_wrap_counter is true
> > > - fix failure to copy avail_wrap_counter during GET_VRING_BASE
> > > - introduce SET_VRING_USED_BASE/GET_VRING_USED_BASE for syncing last_used_idx
> > > - rename used_wrap_counter to last_used_wrap_counter
> > > - rebase to net-next
> > > 
> > > Changes from RFC V4:
> > > 
> > > - fix signalled_used index recording
> > > - track avail index correctly
> > > - various minor fixes
> > > 
> > > Changes from RFC V3:
> > > 
> > > - Fix math on event idx checking
> > > - Sync last avail wrap counter through GET/SET_VRING_BASE
> > > - remove desc_event prefix in the driver/device structure
> > > 
> > > Changes from RFC V2:
> > > 
> > > - do not use & in checking desc_event_flags
> > > - off should be most significant bit
> > > - remove the workaround of mergeable buffer for dpdk prototype
> > > - id should be in the last descriptor in the chain
> > > - keep _F_WRITE for write descriptor when adding used
> > > - device flags updating should use ADDR_USED type
> > > - return error on unexpected unavail descriptor in a chain
> > > - return false in vhost_vq_avail_empty() if a descriptor is available
> > > - track last seen avail_wrap_counter
> > > - correctly examine available descriptor in get_indirect_packed()
> > > - vhost_idx_diff should return u16 instead of bool
> > > 
> > > Changes from RFC V1:
> > > 
> > > - Refactor vhost used elem code to avoid open coding on used elem
> > > - Event suppression support (compile test only).
> > > - Indirect descriptor support (compile test only).
> > > - Zerocopy support.
> > > - vIOMMU support.
> > > - SCSI/VSOCK support (compile test only).
> > > - Fix several bugs
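
(The event index math mentioned in the V5 changelog is the familiar
wrap-around comparison of vring_need_event() from
include/uapi/linux/virtio_ring.h; for the packed ring, the event value
additionally carries the expected wrap counter in its top bit. A
simplified sketch of the idea, not the exact code from this series:)

#include <linux/types.h>

/*
 * Signal only if new_idx has moved past event_idx since old_idx; all
 * arithmetic is unsigned 16-bit and wraps naturally.
 */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}

/*
 * Packed ring event value: bits 0-14 are a ring offset, bit 15 is the
 * expected wrap counter (layout per the virtio 1.1 spec; the helper
 * names here are illustrative).
 */
#define EVENT_WRAP_CTR_MASK	(1 << 15)

static inline __u16 event_offset(__u16 off_wrap)
{
	return off_wrap & ~EVENT_WRAP_CTR_MASK;
}

static inline int event_wrap_ctr(__u16 off_wrap)
{
	return !!(off_wrap & EVENT_WRAP_CTR_MASK);
}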
> > > 
> > > Jason Wang (8):
> > >    vhost: move get_rx_bufs to vhost.c
> > >    vhost: hide used ring layout from device
> > >    vhost: do not use vring_used_elem
> > >    vhost_net: do not explicitly manipulate vhost_used_elem
> > >    vhost: vhost_put_user() can accept metadata type
> > >    vhost: packed ring support
> > >    vhost: event suppression for packed ring
> > >    vhost: enable packed virtqueues
> > > 
> > >   drivers/vhost/net.c        | 143 ++-----
> > >   drivers/vhost/scsi.c       |  62 +--
> > >   drivers/vhost/vhost.c      | 994 ++++++++++++++++++++++++++++++++++++++++-----
> > >   drivers/vhost/vhost.h      |  55 ++-
> > >   drivers/vhost/vsock.c      |  42 +-
> > >   include/uapi/linux/vhost.h |   7 +
> > >   6 files changed, 1035 insertions(+), 268 deletions(-)
> > > 
> > > -- 
> > > 2.7.4
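
For readers who want the gist of the ring format itself: a packed ring
uses a single descriptor array shared by driver and device, and each
side keeps a wrap counter that flips every time its index wraps around
the ring; a descriptor's AVAIL/USED flag bits are compared against that
counter to decide ownership. A minimal sketch of the device-side
availability check (flag bit positions follow the virtio 1.1 spec,
defined here as masks for brevity; this is a simplification, not the
exact code from the series):

#include <stdbool.h>
#include <stdint.h>

/* Bits 7 and 15 of desc->flags in the packed ring layout. */
#define DESC_F_AVAIL	(1 << 7)
#define DESC_F_USED	(1 << 15)

/*
 * The driver marks a descriptor available by setting AVAIL to its
 * current wrap counter and USED to the inverse; both bits flip
 * meaning on every wrap, so the ring never needs to be reset.
 */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & DESC_F_AVAIL);
	bool used  = !!(flags & DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}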


Thread overview: 25+ messages
2018-07-16  3:28 [PATCH net-next V2 0/8] Packed virtqueue support for vhost Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 1/8] vhost: move get_rx_bufs to vhost.c Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 2/8] vhost: hide used ring layout from device Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 3/8] vhost: do not use vring_used_elem Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 4/8] vhost_net: do not explicitly manipulate vhost_used_elem Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 5/8] vhost: vhost_put_user() can accept metadata type Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 6/8] vhost: packed ring support Jason Wang
2018-10-12 14:32   ` Tiwei Bie
2018-10-12 17:23     ` Michael S. Tsirkin
2018-10-15  2:22       ` Jason Wang
2018-10-15  2:43         ` Michael S. Tsirkin
2018-10-15  2:51           ` Jason Wang
2018-10-15 10:25             ` Michael S. Tsirkin
2018-10-18  2:44               ` Jason Wang
2018-10-16 13:58         ` Maxime Coquelin
2018-10-17  6:54           ` Jason Wang
2018-10-17 12:02             ` Maxime Coquelin
2018-07-16  3:28 ` [PATCH net-next V2 7/8] vhost: event suppression for packed ring Jason Wang
2018-07-16  3:28 ` [PATCH net-next V2 8/8] vhost: enable packed virtqueues Jason Wang
2018-07-16  8:39 ` [PATCH net-next V2 0/8] Packed virtqueue support for vhost Michael S. Tsirkin
2018-07-16  9:46   ` Jason Wang
2018-07-16 12:49     ` Michael S. Tsirkin [this message]
2018-07-17  0:45       ` Jason Wang
2018-07-22 16:56         ` Michael S. Tsirkin
2018-07-18  4:09       ` David Miller
