From: Jason Wang
To: "Michael S. Tsirkin"
Cc: cornelia.huck@de.ibm.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] vhost: don't set vring call if no vector
Date: Tue, 2 Aug 2016 10:37:14 +0800
Message-ID: <0167ea42-35eb-d9e4-3054-ad9326914092@redhat.com>
In-Reply-To: <20160801162940-mutt-send-email-mst@kernel.org>
References: <1470038878-5599-1-git-send-email-jasowang@redhat.com> <20160801162940-mutt-send-email-mst@kernel.org>

On 2016-08-01 22:08, Michael S. Tsirkin wrote:
> On Mon, Aug 01, 2016 at 04:07:58PM +0800, Jason Wang wrote:
>> We used to set the vring call fd unconditionally even if the guest driver
>> does not use MSI-X for this virtqueue at all. This causes lots of
>> unnecessary userspace accesses and other checks for drivers that do not
>> use interrupts at all (e.g. the virtio-net PMD). So check and clear the
>> vring call fd if the guest does not use any vector for this virtqueue at
>> all.
>>
>> Perf diffs (on rx) show that lots of CPU cycles wasted in vhost_signal()
>> were saved:
>>
>> #
>> 28.12%  -27.82%  [vhost]            [k] vhost_signal
>> 14.44%   -1.69%  [kernel.vmlinux]   [k] copy_user_generic_string
>>  7.05%   +1.53%  [kernel.vmlinux]   [k] __free_page_frag
>>  6.51%   +5.53%  [vhost]            [k] vhost_get_vq_desc
>> ...
>>
>> Pktgen tests show a 15.8% improvement in rx pps and 6.5% in tx pps.
>>
>> Before: RX 2.08Mpps TX 1.35Mpps
>> After:  RX 2.41Mpps TX 1.44Mpps
>>
>> Signed-off-by: Jason Wang
>> ---
>>  hw/virtio/vhost.c | 14 ++++++++++++++
>>  1 file changed, 14 insertions(+)
>>
>> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
>> index 3d0c807..bd051ab 100644
>> --- a/hw/virtio/vhost.c
>> +++ b/hw/virtio/vhost.c
>> @@ -822,6 +822,9 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
>>                                   struct vhost_virtqueue *vq,
>>                                   unsigned idx)
>>  {
>> +    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
>> +    VirtioBusState *vbus = VIRTIO_BUS(qbus);
>> +    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
>>      hwaddr s, l, a;
>>      int r;
>>      int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, idx);
>> @@ -912,8 +915,19 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
>>          vhost_virtqueue_mask(dev, vdev, idx, false);
>>      }
>>
>> +    if (k->query_guest_notifiers &&
>> +        k->query_guest_notifiers(qbus->parent) &&
>> +        virtio_queue_vector(vdev, idx) == VIRTIO_NO_VECTOR) {
>> +        file.fd = -1;
>> +        r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
>> +        if (r) {
>> +            goto fail_vector;
>> +        }
>> +    }
>> +
> It's rather asymmetrical though. Wouldn't a better place
> be in vhost_virtqueue_mask?
Looks not, since it will only be called when:

- the backend does not support interrupt masking (vhost-user), while the
  patch should work for vhost-kernel too
- a vector is masked or unmasked, while the patch only works for a
  virtqueue whose vector is VIRTIO_NO_VECTOR

A self-contained sketch of the added condition follows the quoted diff below.

>
>
>>      return 0;
>>
>> +fail_vector:
>>  fail_kick:
>>  fail_alloc:
>>      cpu_physical_memory_unmap(vq->ring, virtio_queue_get_ring_size(vdev, idx),
>> --
>> 2.7.4
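To make the new condition easy to reason about in isolation, here is a minimal
sketch of the check the patch adds to vhost_virtqueue_start(). The helper name
vhost_vq_has_no_vector and the factoring into a separate function are
illustrative only, and the includes are assumed from QEMU's hw/virtio headers;
the types and calls themselves follow the diff above:

#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-bus.h"

/* Illustrative helper (not part of the patch): true when the transport can
 * report guest notifier usage and this virtqueue has no MSI-X vector
 * assigned, i.e. the case where the patch detaches the call eventfd by
 * passing file.fd = -1 to the backend's vhost_set_vring_call op. */
static bool vhost_vq_has_no_vector(VirtioBusClass *k, BusState *qbus,
                                   VirtIODevice *vdev, unsigned idx)
{
    return k->query_guest_notifiers &&
           k->query_guest_notifiers(qbus->parent) &&
           virtio_queue_vector(vdev, idx) == VIRTIO_NO_VECTOR;
}

Running this check from vhost_virtqueue_start() rather than from
vhost_virtqueue_mask() is exactly what the two points above are about: the
start path runs for every backend and every queue, while the mask path only
runs in the cases listed.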