From: Jason Wang <jasowang@redhat.com>
To: mst@redhat.com, kvm@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Jason Wang <jasowang@redhat.com>
Subject: [PATCH V3 6/6] vhost_net: correctly limit the max pending buffers
Date: Mon,  2 Sep 2013 16:41:01 +0800
Message-ID: <1378111261-14826-7-git-send-email-jasowang@redhat.com>
In-Reply-To: <1378111261-14826-1-git-send-email-jasowang@redhat.com>

As Michael points out, we used to limit the max pending DMAs to get better
cache utilization. But it was not done correctly, since the check was only
performed when there were no new buffers submitted from the guest. The guest
could easily exceed the limit by continuously sending packets.

So this patch moves the check into the main loop. Tests show about a 5%-10%
improvement in per-cpu throughput for guest tx.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c |   18 +++++++-----------
 1 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8e9dc55..831eb4f 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -363,6 +363,13 @@ static void handle_tx(struct vhost_net *net)
 		if (zcopy)
 			vhost_zerocopy_signal_used(net, vq);
 
+		/* If more outstanding DMAs, queue the work.
+		 * Handle upend_idx wrap around
+		 */
+		if (unlikely((nvq->upend_idx + vq->num - VHOST_MAX_PEND)
+			      % UIO_MAXIOV == nvq->done_idx))
+			break;
+
 		head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
 					 ARRAY_SIZE(vq->iov),
 					 &out, &in,
@@ -372,17 +379,6 @@ static void handle_tx(struct vhost_net *net)
 			break;
 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
 		if (head == vq->num) {
-			int num_pends;
-
-			/* If more outstanding DMAs, queue the work.
-			 * Handle upend_idx wrap around
-			 */
-			num_pends = likely(nvq->upend_idx >= nvq->done_idx) ?
-				    (nvq->upend_idx - nvq->done_idx) :
-				    (nvq->upend_idx + UIO_MAXIOV -
-				     nvq->done_idx);
-			if (unlikely(num_pends > VHOST_MAX_PEND))
-				break;
 			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
 				vhost_disable_notify(&net->dev, vq);
 				continue;
-- 
1.7.1
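
The limit that both hunks enforce rests on simple ring-index arithmetic:
upend_idx and done_idx each walk a ring of UIO_MAXIOV slots, so the number
of outstanding zerocopy DMAs is their distance modulo UIO_MAXIOV. Below is
a minimal standalone sketch of that arithmetic; the test harness is
hypothetical and not part of the driver, though the constants match the
kernel's UIO_MAXIOV (1024) and net.c's VHOST_MAX_PEND (128), and
num_pends() mirrors the ternary from the removed hunk.

#include <stdio.h>

#define UIO_MAXIOV     1024	/* size of the zerocopy slot ring */
#define VHOST_MAX_PEND  128	/* cap on outstanding DMAs */

/* Outstanding DMAs: distance from done_idx to upend_idx on the ring.
 * Adding UIO_MAXIOV before the modulo keeps the result non-negative
 * when upend_idx has wrapped past the end of the ring, which is what
 * the two-armed ternary in the removed hunk handled explicitly. */
static int num_pends(int upend_idx, int done_idx)
{
	return (upend_idx - done_idx + UIO_MAXIOV) % UIO_MAXIOV;
}

int main(void)
{
	printf("%d\n", num_pends(700, 600));                 /* no wrap: 100 */
	printf("%d\n", num_pends(10, 950));                  /* wrapped: 84  */
	printf("%d\n", num_pends(200, 50) > VHOST_MAX_PEND); /* 1: over cap  */
	return 0;
}

The in-loop check added above expresses its limit as a single modular
equality of the same shape, and it runs before every descriptor fetch, so
a guest that keeps submitting buffers can no longer sidestep the cap the
way it could when the test only ran on an empty ring.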


Thread overview: 26+ messages
2013-09-02  8:40 [PATCH V3 0/6] vhost code cleanup and minor enhancement Jason Wang
2013-09-02  8:40 ` [PATCH V3 1/6] vhost_net: make vhost_zerocopy_signal_used() return void Jason Wang
2013-09-02  8:40 ` [PATCH V3 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() Jason Wang
2013-09-02  8:40 ` [PATCH V3 3/6] vhost: switch to use vhost_add_used_n() Jason Wang
2013-09-02  8:40 ` [PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time Jason Wang
2013-09-04 11:59   ` Michael S. Tsirkin
2013-09-05  2:54     ` Jason Wang
2013-09-23  7:16       ` Michael S. Tsirkin
2013-09-26  4:30         ` Jason Wang
2013-09-29  9:36           ` Jason Wang
2013-09-02  8:41 ` [PATCH V3 5/6] vhost_net: poll vhost queue after marking DMA is done Jason Wang
2013-09-02  8:41 ` [PATCH V3 6/6] vhost_net: correctly limit the max pending buffers Jason Wang [this message]
2013-09-04  2:47 ` [PATCH V3 0/6] vhost code cleanup and minor enhancement David Miller
