From: Wei Wang <wei.w.wang@intel.com>
To: kvm@vger.kernel.org, qemu-devel@nongnu.org,
	virtio-comment@lists.oasis-open.org,
	virtio-dev@lists.oasis-open.org, mst@redhat.com,
	stefanha@redhat.com, pbonzini@redhat.com
Cc: Wei Wang <wei.w.wang@intel.com>
Subject: [PATCH 6/6] Vhost-pci RFC: Experimental Results
Date: Sun, 29 May 2016 07:36:35 +0800	[thread overview]
Message-ID: <1464478595-146533-7-git-send-email-wei.w.wang@intel.com> (raw)
In-Reply-To: <1464478595-146533-1-git-send-email-wei.w.wang@intel.com>

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 Results | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 Results

diff --git a/Results b/Results
new file mode 100644
index 0000000..7402826
--- /dev/null
+++ b/Results
@@ -0,0 +1,18 @@
+We have built a basic vhost-pci based inter-VM communication framework for
+network packet transmission. To measure how throughput scales as more VMs
+are chained to stream out packets, we chain 2 to 5 VMs and follow the vsperf
+test methodology proposed by OPNFV, as shown in Fig. 2. The first VM is
+assigned a passthrough physical NIC to inject packets from an external packet
+generator, and the last VM is assigned a passthrough physical NIC to eject
+packets back to the external generator. A layer-2 forwarding module in each
+VM forwards incoming packets from NIC1 (the injection NIC) to NIC2 (the
+ejection NIC). In the traditional setup, NIC2 is a virtio-net device
+connected to the vhost-user backend in OVS. With our proposed solution, NIC2
+is a vhost-pci device, which copies packets directly to the next VM. The
+packet generator implements the RFC 2544 standard, keeping the offered load
+at a 0% packet loss rate.
+
+Fig. 3 shows the scalability test results. In the vhost-user case, a
+significant throughput drop (40%~55%) occurs when 4 or 5 VMs are chained
+together. The vhost-pci based inter-VM communication scales well (no
+significant throughput drop) as more VMs are chained together.
-- 
1.8.3.1
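[Note: the RFC 2544 zero-loss throughput test that the packet generator runs is, in essence, a binary search for the highest offered load with no drops. A minimal sketch of that search logic follows; `measure_loss` is a hypothetical stand-in for driving the actual traffic generator, and the rates are illustrative, not from the results above.]

```python
def rfc2544_zero_loss_search(measure_loss, line_rate_mpps,
                             resolution_mpps=0.01):
    """Binary-search the highest offered load (Mpps) that the device
    under test forwards with zero packet loss (RFC 2544-style search).

    measure_loss(rate_mpps) -> fraction of packets lost at that rate
    (hypothetical hook into the packet generator).
    """
    lo, hi = 0.0, line_rate_mpps
    best = 0.0
    while hi - lo > resolution_mpps:
        mid = (lo + hi) / 2
        if measure_loss(mid) == 0.0:
            best = mid          # zero loss: try a higher rate
            lo = mid
        else:
            hi = mid            # loss observed: back off
    return best

# Example with a synthetic DUT that starts dropping above 7.5 Mpps:
if __name__ == "__main__":
    dut = lambda rate: 0.0 if rate <= 7.5 else (rate - 7.5) / rate
    print(round(rfc2544_zero_loss_search(dut, line_rate_mpps=14.88), 2))
    # prints 7.5
```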


Thread overview: 17+ messages
2016-05-28 23:36 [PATCH 0/6] *** Vhost-pci RFC *** Wei Wang
2016-05-28 23:36 ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` [PATCH 1/6] Vhost-pci RFC: Introduction Wei Wang
2016-05-28 23:36   ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` [PATCH 2/6] Vhost-pci RFC: Modification Scope Wei Wang
2016-05-28 23:36   ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` [PATCH 3/6] Vhost-pci RFC: Benefits to KVM Wei Wang
2016-05-28 23:36   ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` [PATCH 4/6] Vhost-pci RFC: Detailed Description in the Virtio Specification Format Wei Wang
2016-05-28 23:36   ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` [PATCH 5/6] Vhost-pci RFC: Future Security Enhancement Wei Wang
2016-05-28 23:36   ` [Qemu-devel] " Wei Wang
2016-05-28 23:36 ` Wei Wang [this message]
2016-05-28 23:36   ` [Qemu-devel] [PATCH 6/6] Vhost-pci RFC: Experimental Results Wei Wang
2016-05-31 18:21 ` [Qemu-devel] [PATCH 0/6] *** Vhost-pci RFC *** Eric Blake
2016-06-01  2:15   ` [virtio-comment] " Wang, Wei W
2016-06-01  2:15     ` Wang, Wei W
