From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wei Wang <wei.w.wang@intel.com>
Subject: [PATCH 6/6 Resend] Vhost-pci RFC: Experimental Results
Date: Sun, 29 May 2016 16:11:34 +0800
Message-ID: <1464509494-159509-7-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1464509494-159509-1-git-send-email-wei.w.wang@intel.com>
References: <1464509494-159509-1-git-send-email-wei.w.wang@intel.com>
To: kvm@vger.kernel.org, qemu-devel@nongnu.org,
 virtio-comment@lists.oasis-open.org, mst@redhat.com, stefanha@redhat.com,
 pbonzini@redhat.com
Cc: Wei Wang <wei.w.wang@intel.com>

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 Results | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 Results

diff --git a/Results b/Results
new file mode 100644
index 0000000..7402826
--- /dev/null
+++ b/Results
@@ -0,0 +1,18 @@
+We have built a basic vhost-pci based inter-VM communication framework
+for network packet transmission. To measure how throughput scales as more
+VMs are added to the packet path, we chain 2 to 5 VMs and follow the
+vsperf test methodology proposed by OPNFV, as shown in Fig. 2. The first
+VM is assigned a passthrough physical NIC to inject packets from an
+external packet generator, and the last VM is assigned a passthrough
+physical NIC to eject packets back to the generator. A layer-2 forwarding
+module in each VM forwards incoming packets from NIC1 (the injection NIC)
+to NIC2 (the ejection NIC). In the traditional setup, NIC2 is a
+virtio-net device connected to the vhost-user backend in OVS. With our
+proposed solution, NIC2 is a vhost-pci device, which copies packets
+directly to the next VM. The packet generator implements the RFC2544
+methodology, keeping the packet loss rate at 0.
+
+Fig. 3 shows the scalability test results. In the vhost-user case, a
+significant throughput drop (40%-55%) occurs when 4 or 5 VMs are chained
+together. The vhost-pci based inter-VM communication scales well, with no
+significant throughput drop as more VMs are chained together.
-- 
1.8.3.1
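
For reference, below is a minimal sketch of the baseline (vhost-user)
guest configuration described above, for one VM in the chain. The
hugepage path, socket path, host PCI address of the passthrough NIC, MAC
address, and disk image are hypothetical placeholders, and OVS is assumed
to already serve the vhost-user socket; the proposed vhost-pci device and
its setup are specified in the earlier patches of this series and are not
repeated here.

  # NIC1: physical NIC assigned to the guest via VFIO passthrough
  #       (hypothetical host PCI address)
  # NIC2: virtio-net device backed by a vhost-user socket served by OVS;
  #       shared hugepage-backed guest memory is required for vhost-user
  qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G \
      -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -device vfio-pci,host=0000:03:00.0 \
      -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01 \
      -drive file=guest.img,format=qcow2

In the vhost-pci case, the virtio-net/vhost-user pair above is replaced
by the proposed vhost-pci device pointing at the next VM in the chain, so
packets are copied directly into the peer VM rather than being switched
through OVS.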