From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:38275)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1YTOfO-0006Rg-An for qemu-devel@nongnu.org;
	Thu, 05 Mar 2015 00:49:51 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1YTOfN-00023h-4f for qemu-devel@nongnu.org;
	Thu, 05 Mar 2015 00:49:50 -0500
Received: from mx1.redhat.com ([209.132.183.28]:33562)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1YTOfM-00023d-TY for qemu-devel@nongnu.org;
	Thu, 05 Mar 2015 00:49:49 -0500
From: Jason Wang
Date: Thu, 5 Mar 2015 13:48:48 +0800
Message-Id: <1425534531-6305-12-git-send-email-jasowang@redhat.com>
In-Reply-To: <1425534531-6305-1-git-send-email-jasowang@redhat.com>
References: <1425534531-6305-1-git-send-email-jasowang@redhat.com>
Subject: [Qemu-devel] [PATCH V3 11/14] virtio-pci: speedup MSI-X masking and unmasking
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: qemu-devel@nongnu.org
Cc: Jason Wang , Anthony Liguori , "Michael S. Tsirkin"

This patch speeds up MSI-X masking and unmasking by using the mapping
between vectors and queues. With this patch there is no need to go
through all possible virtqueues, which helps reduce the time spent
masking or unmasking a single vector when hundreds or even thousands
of virtqueues are supported.

Tested with an 80 queue pair virtio-net-pci device by changing the smp
affinity in the background while running netperf at the same time:

Before the patch: 5711.70 Gbits/sec
After the patch: 6830.98 Gbits/sec

About 19.6% improvement in throughput.

Cc: Anthony Liguori 
Cc: Michael S. Tsirkin 
Signed-off-by: Jason Wang 
---
 hw/virtio/virtio-pci.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)
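
(Illustration only, not part of the patch: a minimal standalone sketch of
the idea, assuming each MSI-X vector keeps a singly linked list of the
virtqueues bound to it. The toy_* names below are made up for the example;
in QEMU the list is walked with virtio_vector_first_queue() and
virtio_vector_next_queue(), introduced earlier in this series. Masking a
vector then visits only the queues on its list instead of scanning every
virtqueue.)

#include <stdio.h>

#define TOY_NVECTORS 2

struct toy_vq {
    int index;                    /* queue index */
    int vector;                   /* MSI-X vector this queue is bound to */
    struct toy_vq *vector_next;   /* next queue sharing the same vector */
    int masked;
};

struct toy_dev {
    struct toy_vq *vector_queues[TOY_NVECTORS];  /* per-vector list heads */
};

/* Bind a queue to a vector: prepend it to that vector's list. */
static void toy_set_vector(struct toy_dev *d, struct toy_vq *vq, int vector)
{
    vq->vector = vector;
    vq->vector_next = d->vector_queues[vector];
    d->vector_queues[vector] = vq;
}

/* Mask every queue bound to 'vector': only that vector's list is walked,
 * instead of scanning all queues and comparing their vectors. */
static void toy_vector_mask(struct toy_dev *d, int vector)
{
    struct toy_vq *vq;

    for (vq = d->vector_queues[vector]; vq; vq = vq->vector_next) {
        vq->masked = 1;
        printf("masked queue %d (vector %d)\n", vq->index, vector);
    }
}

int main(void)
{
    struct toy_dev d = { { NULL } };
    struct toy_vq vqs[6] = { { 0 } };
    int i;

    for (i = 0; i < 6; i++) {
        vqs[i].index = i;
        toy_set_vector(&d, &vqs[i], i % TOY_NVECTORS);
    }

    /* Only the three queues bound to vector 1 are visited. */
    toy_vector_mask(&d, 1);
    return 0;
}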

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 280bba2..327a3fc 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -676,28 +676,30 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int ret, queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int ret, index, unmasked = 0;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_queue_get_index(vdev, vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        ret = virtio_pci_vq_vector_unmask(proxy, queue_no, vector, msg);
+        ret = virtio_pci_vq_vector_unmask(proxy, index, vector, msg);
         if (ret < 0) {
             goto undo;
         }
+        vq = virtio_vector_next_queue(vq);
+        ++unmasked;
     }
+
     return 0;
 
 undo:
-    while (--queue_no >= 0) {
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+    vq = virtio_vector_first_queue(vdev, vector);
+    while (vq && --unmasked >= 0) {
+        index = virtio_queue_get_index(vdev, vq);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
     return ret;
 }
@@ -706,16 +708,16 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int index;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_queue_get_index(vdev, vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
 }
 
-- 
2.1.0