From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:50711)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1bUDtz-0004ve-Pe for qemu-devel@nongnu.org;
	Mon, 01 Aug 2016 10:09:12 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1bUDtu-00084A-0y for qemu-devel@nongnu.org;
	Mon, 01 Aug 2016 10:09:06 -0400
Received: from mx1.redhat.com ([209.132.183.28]:53714)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1bUDtt-000845-RS for qemu-devel@nongnu.org;
	Mon, 01 Aug 2016 10:09:01 -0400
Date: Mon, 1 Aug 2016 17:08:57 +0300
From: "Michael S. Tsirkin"
Message-ID: <20160801162940-mutt-send-email-mst@kernel.org>
References: <1470038878-5599-1-git-send-email-jasowang@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1470038878-5599-1-git-send-email-jasowang@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] vhost: don't set vring call if no vector
To: Jason Wang
Cc: qemu-devel@nongnu.org, cornelia.huck@de.ibm.com

On Mon, Aug 01, 2016 at 04:07:58PM +0800, Jason Wang wrote:
> We used to set the vring call fd unconditionally even if the guest
> driver does not use MSI-X for this virtqueue at all. This causes lots
> of unnecessary userspace accesses and other checks for drivers that do
> not use interrupts at all (e.g. the virtio-net pmd). So check for this
> case and clear the vring call fd if the guest does not use any vector
> for this virtqueue.
>
> Perf diffs (on rx) show that lots of cpu cycles wasted in
> vhost_signal() were saved:
>
> #
> 28.12%  -27.82%  [vhost]           [k] vhost_signal
> 14.44%   -1.69%  [kernel.vmlinux]  [k] copy_user_generic_string
>  7.05%   +1.53%  [kernel.vmlinux]  [k] __free_page_frag
>  6.51%   +5.53%  [vhost]           [k] vhost_get_vq_desc
> ...
>
> Pktgen tests show a 15.8% improvement in rx pps and 6.5% in tx pps.
>
> Before: RX 2.08Mpps TX 1.35Mpps
> After:  RX 2.41Mpps TX 1.44Mpps
>
> Signed-off-by: Jason Wang
> ---
>  hw/virtio/vhost.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 3d0c807..bd051ab 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -822,6 +822,9 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
>                                   struct vhost_virtqueue *vq,
>                                   unsigned idx)
>  {
> +    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> +    VirtioBusState *vbus = VIRTIO_BUS(qbus);
> +    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
>      hwaddr s, l, a;
>      int r;
>      int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, idx);
> @@ -912,8 +915,19 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
>          vhost_virtqueue_mask(dev, vdev, idx, false);
>      }
>
> +    if (k->query_guest_notifiers &&
> +        k->query_guest_notifiers(qbus->parent) &&
> +        virtio_queue_vector(vdev, idx) == VIRTIO_NO_VECTOR) {
> +        file.fd = -1;
> +        r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
> +        if (r) {
> +            goto fail_vector;
> +        }
> +    }
> +

It's rather asymmetrical, though. Wouldn't a better place be in
vhost_virtqueue_mask?

>      return 0;
>
> +fail_vector:
>  fail_kick:
>  fail_alloc:
>      cpu_physical_memory_unmap(vq->ring, virtio_queue_get_ring_size(vdev, idx),
> --
> 2.7.4
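
For illustration only, a rough and untested sketch of the suggestion above:
make the VIRTIO_NO_VECTOR case part of the call-fd selection in
vhost_virtqueue_mask() instead of special-casing it in
vhost_virtqueue_start(). The function body below is approximate (paraphrased,
not taken from the current tree), and the exact placement of the check is a
guess, not a tested change.

/* Untested sketch: choose the call fd in one place.  If the guest assigned
 * no MSI-X vector to this queue, install no call fd at all, so the kernel
 * never signals a notifier nobody is listening to.
 */
void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev,
                          int n, bool mask)
{
    VirtQueue *vvq = virtio_get_queue(vdev, n);
    int r, index = n - hdev->vq_index;
    struct vhost_vring_file file;

    if (mask) {
        /* Masked: route interrupts to the masked notifier. */
        file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
    } else if (virtio_queue_vector(vdev, n) == VIRTIO_NO_VECTOR) {
        /* No vector assigned by the guest: clear the call fd. */
        file.fd = -1;
    } else {
        /* Unmasked: route interrupts to the guest notifier. */
        file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
    }

    file.index = hdev->vhost_ops->vhost_get_vq_index(hdev, n);
    r = hdev->vhost_ops->vhost_set_vring_call(hdev, &file);
    if (r < 0) {
        error_report("vhost_set_vring_call failed");
    }
}

The idea is just to keep the call-fd choice in one place; whether the mask
path is actually reached for queues that have no vector assigned is something
that would need checking.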