* [PATCH v3 net-next 0/2] rx busy polling support for virtio-net
@ 2014-07-23  8:33 Jason Wang
From: Jason Wang @ 2014-07-23  8:33 UTC
  To: rusty, mst, virtualization, netdev, linux-kernel; +Cc: Jason Wang

Hi all:

This series introduces support for rx busy polling in virtio-net. This
is useful for reducing latency in a kvm guest. Instead of introducing
new states and spinlocks, the series reuses the NAPI state to
synchronize between NAPI and busy polling. This greatly simplifies the
code and avoids spinlock overhead on the normal NAPI fast path.

Testing was done between a kvm guest and an external host, with the
two hosts connected through 40Gb mlx4 cards. With both busy_poll and
busy_read set to 50 in the guest, 1-byte netperf TCP_RR shows a 127%
improvement: the transaction rate increased from 8353.33 to 18966.87.
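
For reference (not part of this series): busy_read and busy_poll above
are the global net.core sysctls. An application can also opt in per
socket with the SO_BUSY_POLL socket option. A minimal userspace
sketch, assuming a Linux socket fd:

#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46	/* value from asm-generic/socket.h */
#endif

/* Ask the kernel to busy poll for ~usecs on blocking reads of this
 * socket, mirroring the busy_read=50 setting used in the test above.
 */
static int enable_busy_poll(int fd, int usecs)
{
	return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
			  &usecs, sizeof(usecs));
}

e.g. enable_busy_poll(sockfd, 50) before entering the recv() loop.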

Changes from V2:
- Avoid introducing new states and spinlocks by reusing the NAPI
  state
- Fix the budget calculation in virtnet_poll()
- Drop patch 1/3 from V2 since it was unnecessary

Changes from V1:
- split the patch into smaller ones
- add more details about the test setup/configuration

Jason Wang (2):
  virtio-net: introduce virtnet_receive()
  virtio-net: rx busy polling support

 drivers/net/virtio_net.c | 67 +++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 6 deletions(-)

-- 
1.8.3.1



* [PATCH v3 net-next 1/2] virtio-net: introduce virtnet_receive()
From: Jason Wang @ 2014-07-23  8:33 UTC
  To: rusty, mst, virtualization, netdev, linux-kernel
  Cc: Jason Wang, Vlad Yasevich, Eric Dumazet

Move the common receive logic into a new helper, virtnet_receive().
It will also be used by the rx busy polling method.
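
With the helper in place, virtnet_poll() reduces to a thin wrapper.
Roughly, as assembled from the hunks below (the tail of the function
is elided by the diff and reproduced here as in the pre-patch driver,
so treat this as a sketch rather than the exact result):

static int virtnet_poll(struct napi_struct *napi, int budget)
{
	struct receive_queue *rq =
		container_of(napi, struct receive_queue, napi);
	unsigned int r, received = 0;

again:
	/* Drain up to the remaining budget via the new helper. */
	received += virtnet_receive(rq, budget - received);

	/* Out of packets? Re-enable callbacks and complete NAPI,
	 * re-polling if buffers raced in meanwhile.
	 */
	if (received < budget) {
		r = virtqueue_enable_cb_prepare(rq->vq);
		napi_complete(napi);
		if (unlikely(virtqueue_poll(rq->vq, r)) &&
		    napi_schedule_prep(napi)) {
			virtqueue_disable_cb(rq->vq);
			__napi_schedule(napi);
			goto again;
		}
	}

	return received;
}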

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Vlad Yasevich <vyasevic@redhat.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7d9f84a..b647d0d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -725,15 +725,12 @@ static void refill_work(struct work_struct *work)
 	}
 }
 
-static int virtnet_poll(struct napi_struct *napi, int budget)
+static int virtnet_receive(struct receive_queue *rq, int budget)
 {
-	struct receive_queue *rq =
-		container_of(napi, struct receive_queue, napi);
 	struct virtnet_info *vi = rq->vq->vdev->priv;
+	unsigned int len, received = 0;
 	void *buf;
-	unsigned int r, len, received = 0;
 
-again:
 	while (received < budget &&
 	       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
 		receive_buf(rq, buf, len);
@@ -745,6 +742,18 @@ again:
 			schedule_delayed_work(&vi->refill, 0);
 	}
 
+	return received;
+}
+
+static int virtnet_poll(struct napi_struct *napi, int budget)
+{
+	struct receive_queue *rq =
+		container_of(napi, struct receive_queue, napi);
+	unsigned int r, received = 0;
+
+again:
+	received += virtnet_receive(rq, budget - received);
+
 	/* Out of packets? */
 	if (received < budget) {
 		r = virtqueue_enable_cb_prepare(rq->vq);
-- 
1.8.3.1



* [PATCH v3 net-next 2/2] virtio-net: rx busy polling support
From: Jason Wang @ 2014-07-23  8:33 UTC
  To: rusty, mst, virtualization, netdev, linux-kernel
  Cc: Jason Wang, Vlad Yasevich, Eric Dumazet

Add basic support for rx busy polling. Instead of introducing new
states and a spinlock to synchronize between NAPI and the polling
method, this patch simply reuses the NAPI state, avoiding extra
overhead on the fast path and simplifying the code.

Testing was done between a kvm guest and an external host, with the
two hosts connected through 40Gb mlx4 cards. With both busy_poll and
busy_read set to 50 in the guest, 1-byte netperf TCP_RR shows a 127%
improvement: the transaction rate increased from 8353.33 to 18966.87.
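
The synchronization boils down to treating the NAPI scheduling bit as
a lock: napi_schedule_prep() atomically takes NAPI_STATE_SCHED and
fails if NAPI already owns the queue, and clearing the bit releases
it. An illustrative sketch of just that idea (these helper names are
hypothetical, not from this patch):

#include <linux/netdevice.h>

/* NAPI_STATE_SCHED doubles as a trylock shared by the NAPI handler
 * and the busy-poll path, so the fast path takes no extra spinlock.
 */
static bool rq_poll_trylock(struct napi_struct *napi)
{
	/* Atomic test-and-set of NAPI_STATE_SCHED; fails while the
	 * regular NAPI poll owns the receive queue.
	 */
	return napi_schedule_prep(napi);
}

static void rq_poll_unlock(struct napi_struct *napi)
{
	/* Release ownership so NAPI can be scheduled again. */
	clear_bit(NAPI_STATE_SCHED, &napi->state);
}

virtnet_busy_poll() below uses exactly this pair around its receive
loop.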

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Vlad Yasevich <vyasevic@redhat.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b647d0d..59caa06 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -27,6 +27,7 @@
 #include <linux/slab.h>
 #include <linux/cpu.h>
 #include <linux/average.h>
+#include <net/busy_poll.h>
 
 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
@@ -521,6 +522,8 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 		skb_shinfo(skb)->gso_segs = 0;
 	}
 
+	skb_mark_napi_id(skb, &rq->napi);
+
 	netif_receive_skb(skb);
 	return;
 
@@ -769,6 +772,43 @@ again:
 	return received;
 }
 
+#ifdef CONFIG_NET_RX_BUSY_POLL
+/* must be called with local_bh_disable()d */
+static int virtnet_busy_poll(struct napi_struct *napi)
+{
+	struct receive_queue *rq =
+		container_of(napi, struct receive_queue, napi);
+	struct virtnet_info *vi = rq->vq->vdev->priv;
+	int r, received = 0, budget = 4;
+
+	if (!(vi->status & VIRTIO_NET_S_LINK_UP))
+		return LL_FLUSH_FAILED;
+
+	if (!napi_schedule_prep(napi))
+		return LL_FLUSH_BUSY;
+
+	virtqueue_disable_cb(rq->vq);
+
+again:
+	received += virtnet_receive(rq, budget);
+
+	r = virtqueue_enable_cb_prepare(rq->vq);
+	clear_bit(NAPI_STATE_SCHED, &napi->state);
+	if (unlikely(virtqueue_poll(rq->vq, r)) &&
+	    napi_schedule_prep(napi)) {
+		virtqueue_disable_cb(rq->vq);
+		if (received < budget) {
+			budget -= received;
+			goto again;
+		} else {
+			__napi_schedule(napi);
+		}
+	}
+
+	return received;
+}
+#endif	/* CONFIG_NET_RX_BUSY_POLL */
+
 static int virtnet_open(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
@@ -1356,6 +1396,9 @@ static const struct net_device_ops virtnet_netdev = {
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = virtnet_netpoll,
 #endif
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	.ndo_busy_poll		= virtnet_busy_poll,
+#endif
 };
 
 static void virtnet_config_changed_work(struct work_struct *work)
@@ -1561,6 +1604,7 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 		vi->rq[i].pages = NULL;
 		netif_napi_add(vi->dev, &vi->rq[i].napi, virtnet_poll,
 			       napi_weight);
+		napi_hash_add(&vi->rq[i].napi);
 
 		sg_init_table(vi->rq[i].sg, ARRAY_SIZE(vi->rq[i].sg));
 		ewma_init(&vi->rq[i].mrg_avg_pkt_len, 1, RECEIVE_AVG_WEIGHT);
@@ -1862,11 +1906,13 @@ static int virtnet_freeze(struct virtio_device *vdev)
 	netif_device_detach(vi->dev);
 	cancel_delayed_work_sync(&vi->refill);
 
-	if (netif_running(vi->dev))
+	if (netif_running(vi->dev)) {
 		for (i = 0; i < vi->max_queue_pairs; i++) {
 			napi_disable(&vi->rq[i].napi);
+			napi_hash_del(&vi->rq[i].napi);
 			netif_napi_del(&vi->rq[i].napi);
 		}
+	}
 
 	remove_vq_common(vi);
 
-- 
1.8.3.1



* Re: [PATCH v3 net-next 0/2] rx busy polling support for virtio-net
From: David Miller @ 2014-07-23 22:12 UTC
  To: jasowang; +Cc: rusty, mst, virtualization, netdev, linux-kernel

From: Jason Wang <jasowang@redhat.com>
Date: Wed, 23 Jul 2014 16:33:53 +0800

> This series introduces support for rx busy polling in virtio-net.
> This is useful for reducing latency in a kvm guest. Instead of
> introducing new states and spinlocks, the series reuses the NAPI
> state to synchronize between NAPI and busy polling. This greatly
> simplifies the code and avoids spinlock overhead on the normal NAPI
> fast path.
> 
> Testing was done between a kvm guest and an external host, with the
> two hosts connected through 40Gb mlx4 cards. With both busy_poll and
> busy_read set to 50 in the guest, 1-byte netperf TCP_RR shows a 127%
> improvement: the transaction rate increased from 8353.33 to 18966.87.
> 
> Changes from V2:
> - Avoid introducing new states and spinlocks by reusing the NAPI
>   state
> - Fix the budget calculation in virtnet_poll()
> - Drop patch 1/3 from V2 since it was unnecessary
> 
> Changes from V1:
> - split the patch into smaller ones
> - add more details about the test setup/configuration

Series applied, thanks Jason.

