From: Jason Wang <jasowang@redhat.com>
Date: Mon, 1 Aug 2016 16:07:58 +0800
Message-Id: <1470038878-5599-1-git-send-email-jasowang@redhat.com>
Subject: [Qemu-devel] [PATCH] vhost: don't set vring call if no vector
To: mst@redhat.com
Cc: qemu-devel@nongnu.org, cornelia.huck@de.ibm.com, Jason Wang <jasowang@redhat.com>

We used to set the vring call fd unconditionally, even if the guest
driver does not use MSI-X for this virtqueue at all. This causes lots
of unnecessary userspace accesses and other checks for drivers that do
not use interrupts at all (e.g. the virtio-net PMD). So check whether
the guest uses any vector for this virtqueue, and clear the vring call
fd if it does not.

Perf diff (on rx) shows that lots of the cpu cycles wasted in
vhost_signal() are saved:

#    28.12%  -27.82%  [vhost]            [k] vhost_signal
     14.44%   -1.69%  [kernel.vmlinux]   [k] copy_user_generic_string
      7.05%   +1.53%  [kernel.vmlinux]   [k] __free_page_frag
      6.51%   +5.53%  [vhost]            [k] vhost_get_vq_desc
...

Pktgen tests show a 15.8% improvement on rx pps and 6.5% on tx pps.

Before: RX 2.08Mpps TX 1.35Mpps
After:  RX 2.41Mpps TX 1.44Mpps

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 3d0c807..bd051ab 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -822,6 +822,9 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
                                  struct vhost_virtqueue *vq,
                                  unsigned idx)
 {
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusState *vbus = VIRTIO_BUS(qbus);
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
     hwaddr s, l, a;
     int r;
     int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, idx);
@@ -912,8 +915,19 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
         vhost_virtqueue_mask(dev, vdev, idx, false);
     }
 
+    if (k->query_guest_notifiers &&
+        k->query_guest_notifiers(qbus->parent) &&
+        virtio_queue_vector(vdev, idx) == VIRTIO_NO_VECTOR) {
+        file.fd = -1;
+        r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
+        if (r) {
+            goto fail_vector;
+        }
+    }
+
     return 0;
 
+fail_vector:
 fail_kick:
 fail_alloc:
     cpu_physical_memory_unmap(vq->ring, virtio_queue_get_ring_size(vdev, idx),
-- 
2.7.4
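
For readers less familiar with the vhost uAPI: the core operation of the
patch is vhost_set_vring_call() with fd = -1, which tells the kernel to
drop the call eventfd for that virtqueue, so vhost_signal() short-circuits
without touching guest memory. Below is a minimal standalone sketch (not
part of the patch, and not QEMU code) of the same operation through the
raw /dev/vhost-net ioctl interface. Full device setup (backend fd, memory
table, ring addresses) is omitted, queue index 0 is just an example, and
running it needs the vhost_net module plus sufficient privileges.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);
    if (vhost_fd < 0) {
        perror("open /dev/vhost-net");
        return 1;
    }

    /* Take ownership of the vhost device for this process. */
    if (ioctl(vhost_fd, VHOST_SET_OWNER) < 0) {
        perror("VHOST_SET_OWNER");
        close(vhost_fd);
        return 1;
    }

    /*
     * fd = -1 clears the call eventfd for virtqueue 0, so the kernel
     * no longer tries to signal an interrupt for this queue; this is
     * the operation the patch performs when the queue's vector is
     * VIRTIO_NO_VECTOR.
     */
    struct vhost_vring_file file = { .index = 0, .fd = -1 };
    if (ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file) < 0) {
        perror("VHOST_SET_VRING_CALL");
        close(vhost_fd);
        return 1;
    }

    close(vhost_fd);
    return 0;
}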