* [PATCH net-next 0/6] net device rx busy polling support in vhost_net
@ 2016-03-31  5:50 Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu Jason Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

Hi all:

This series tries to add net device rx busy polling support to
vhost_net. This is done through:

- adding socket rx busy polling support for tun/macvtap by marking
  the napi_id on the socket when packets are queued.
- having vhost_net look up the device's napi through that napi_id and
  busy poll it when possible (a usage sketch follows this list).
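
The busy loop only runs when a virtqueue's busyloop timeout is
non-zero. A minimal userspace sketch for turning it on (assuming the
VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl from the earlier vhost polling
series; the queue index and timeout value here are illustrative only):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	/* Enable busy polling on virtqueue 0 of an already configured
	 * vhost-net fd. A timeout of 0 (the default) disables it. */
	static int enable_busyloop(int vhost_fd)
	{
		struct vhost_vring_state state = {
			.index = 0,	/* virtqueue index */
			.num = 50,	/* busyloop timeout, roughly in us */
		};

		if (ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT,
			  &state) < 0) {
			perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");
			return -1;
		}
		return 0;
	}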

TCP_RR tests between two mlx4 NICs show some improvements:

smp=1 queue=1
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +4%/   +3%/   +3%/   +3%/  +22%
    1/    50/   +2%/   +2%/   +2%/   +2%/    0%
    1/   100/   +1%/    0%/   +1%/   +1%/   -1%
    1/   200/   +2%/   +1%/   +2%/   +2%/    0%
   64/     1/   +1%/   +3%/   +1%/   +1%/   +1%
   64/    50/    0%/    0%/    0%/    0%/   -1%
   64/   100/   +1%/    0%/   +1%/   +1%/    0%
   64/   200/    0%/    0%/   +2%/   +2%/    0%
  256/     1/   +2%/   +2%/   +2%/   +2%/   +2%
  256/    50/   +3%/   +3%/   +3%/   +3%/    0%
  256/   100/   +1%/   +1%/   +2%/   +2%/    0%
  256/   200/    0%/    0%/   +1%/   +1%/   +1%
 1024/     1/   +2%/   +2%/   +2%/   +2%/   +2%
 1024/    50/   -1%/   -1%/   -1%/   -1%/   -2%
 1024/   100/   +1%/   +1%/    0%/    0%/   -1%
 1024/   200/   +2%/   +1%/   +2%/   +2%/    0%

smp=8 queue=1
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +1%/   -5%/   +1%/   +1%/    0%
    1/    50/   +1%/    0%/   +1%/   +1%/   -1%
    1/   100/   -1%/   -1%/   -2%/   -2%/   -4%
    1/   200/    0%/    0%/    0%/    0%/   -1%
   64/     1/   -2%/  -10%/   -2%/   -2%/   -2%
   64/    50/   -1%/   -1%/   -1%/   -1%/   -2%
   64/   100/   -1%/    0%/    0%/    0%/   -1%
   64/   200/   -1%/   -1%/    0%/    0%/    0%
  256/     1/   +7%/  +25%/   +7%/   +7%/   +7%
  256/    50/   +2%/   +2%/   +2%/   +2%/   -1%
  256/   100/   -1%/   -1%/   -1%/   -1%/   -3%
  256/   200/   +1%/    0%/    0%/    0%/    0%
 1024/     1/   +5%/  +15%/   +5%/   +5%/   +4%
 1024/    50/    0%/    0%/   -1%/   -1%/   -1%
 1024/   100/   -1%/   -1%/   -1%/   -1%/   -2%
 1024/   200/   -1%/    0%/   -1%/   -1%/   -1%

smp=8 queue=8
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +5%/   +2%/   +5%/   +5%/    0%
    1/    50/   +2%/   +2%/   +3%/   +3%/  -20%
    1/   100/   +5%/   +5%/   +5%/   +5%/  -13%
    1/   200/   +8%/   +8%/   +6%/   +6%/  -12%
   64/     1/    0%/   +4%/    0%/    0%/  +18%
   64/    50/   +6%/   +5%/   +5%/   +5%/   -7%
   64/   100/   +4%/   +4%/   +5%/   +5%/  -12%
   64/   200/   +5%/   +5%/   +5%/   +5%/  -12%
  256/     1/    0%/   -3%/    0%/    0%/   +1%
  256/    50/   +3%/   +3%/   +3%/   +3%/   -2%
  256/   100/   +6%/   +5%/   +5%/   +5%/  -11%
  256/   200/   +4%/   +4%/   +4%/   +4%/  -13%
 1024/     1/    0%/   -3%/    0%/    0%/   -6%
 1024/    50/   +1%/   +1%/   +1%/   +1%/  -10%
 1024/   100/   +4%/   +4%/   +5%/   +5%/  -11%
 1024/   200/   +4%/   +5%/   +4%/   +4%/  -12%

Thanks

Jason Wang (6):
  net: skbuff: don't use union for napi_id and sender_cpu
  tuntap: socket rx busy polling support
  macvtap: socket rx busy polling support
  net: core: factor out core busy polling logic to sk_busy_loop_once()
  net: export napi_by_id()
  vhost_net: net device rx busy polling support

 drivers/net/macvtap.c   |  4 ++++
 drivers/net/tun.c       |  3 ++-
 drivers/vhost/net.c     | 22 ++++++++++++++++--
 include/linux/skbuff.h  | 10 ++++----
 include/net/busy_poll.h |  8 +++++++
 net/core/dev.c          | 62 ++++++++++++++++++++++++++++---------------------
 6 files changed, 75 insertions(+), 34 deletions(-)

-- 
2.5.0

* [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  2016-03-31 10:32   ` Eric Dumazet
  2016-03-31  5:50 ` [PATCH net-next 2/6] tuntap: socket rx busy polling support Jason Wang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

We use a union for napi_id and sender_cpu. This is fine for most
cases, except when we want to support busy polling for tun, which
needs napi_id to be stored in the skb and passed to the socket during
tun_net_xmit(). In that case, if XPS is enabled, napi_id is overridden
with sender_cpu before tun_net_xmit() is called. Fix this by not
using a union for napi_id and sender_cpu.
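
A toy userspace model of the clash (purely illustrative, not kernel
code; the values are made up):

	#include <stdio.h>

	/* The old sk_buff layout: napi_id and sender_cpu share storage,
	 * so whichever path writes last wins. */
	union meta {
		unsigned int napi_id;
		unsigned int sender_cpu;
	};

	int main(void)
	{
		union meta m;

		m.napi_id = 8193;	/* rx path: skb_mark_napi_id() */
		m.sender_cpu = 3;	/* XPS tx path clobbers it */
		printf("napi_id now reads %u\n", m.napi_id); /* prints 3 */
		return 0;
	}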

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/linux/skbuff.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 15d0df9..8aee891 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -743,11 +743,11 @@ struct sk_buff {
 	__u32			hash;
 	__be16			vlan_proto;
 	__u16			vlan_tci;
-#if defined(CONFIG_NET_RX_BUSY_POLL) || defined(CONFIG_XPS)
-	union {
-		unsigned int	napi_id;
-		unsigned int	sender_cpu;
-	};
+#if defined(CONFIG_NET_RX_BUSY_POLL)
+	unsigned int		napi_id;
+#endif
+#if defined(CONFIG_XPS)
+	unsigned int		sender_cpu;
 #endif
 	union {
 #ifdef CONFIG_NETWORK_SECMARK
-- 
2.5.0

* [PATCH net-next 2/6] tuntap: socket rx busy polling support
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 3/6] macvtap: " Jason Wang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

Copy the napi_id from skbs into the tun socket when packets are
queued in tun_net_xmit(), so that vhost_net busy polling can find the
underlying napi.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tun.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index afdf950..950faf5 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -69,6 +69,7 @@
 #include <net/netns/generic.h>
 #include <net/rtnetlink.h>
 #include <net/sock.h>
+#include <net/busy_poll.h>
 #include <linux/seq_file.h>
 #include <linux/uio.h>
 
@@ -871,6 +872,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	nf_reset(skb);
 
+	sk_mark_napi_id(tfile->socket.sk, skb);
 	/* Enqueue packet */
 	skb_queue_tail(&tfile->socket.sk->sk_receive_queue, skb);
 
@@ -878,7 +880,6 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (tfile->flags & TUN_FASYNC)
 		kill_fasync(&tfile->fasync, SIGIO, POLL_IN);
 	tfile->socket.sk->sk_data_ready(tfile->socket.sk);
-
 	rcu_read_unlock();
 	return NETDEV_TX_OK;
 
-- 
2.5.0

* [PATCH net-next 3/6] macvtap: socket rx busy polling support
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 2/6] tuntap: socket rx busy polling support Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 4/6] net: core: factor out core busy polling logic to sk_busy_loop_once() Jason Wang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

Copy the napi_id from skbs into the macvtap socket when packets are
queued in macvtap_handle_frame(), so that vhost_net busy polling can
find the underlying napi.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/macvtap.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
index 95394ed..1891aff 100644
--- a/drivers/net/macvtap.c
+++ b/drivers/net/macvtap.c
@@ -20,6 +20,7 @@
 #include <net/net_namespace.h>
 #include <net/rtnetlink.h>
 #include <net/sock.h>
+#include <net/busy_poll.h>
 #include <linux/virtio_net.h>
 
 /*
@@ -369,6 +370,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
 			goto drop;
 
 		if (!segs) {
+			sk_mark_napi_id(&q->sk, skb);
 			skb_queue_tail(&q->sk.sk_receive_queue, skb);
 			goto wake_up;
 		}
@@ -378,6 +380,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
 			struct sk_buff *nskb = segs->next;
 
 			segs->next = NULL;
+			sk_mark_napi_id(&q->sk, segs);
 			skb_queue_tail(&q->sk.sk_receive_queue, segs);
 			segs = nskb;
 		}
@@ -391,6 +394,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
 		    !(features & NETIF_F_CSUM_MASK) &&
 		    skb_checksum_help(skb))
 			goto drop;
+		sk_mark_napi_id(&q->sk, skb);
 		skb_queue_tail(&q->sk.sk_receive_queue, skb);
 	}
 
-- 
2.5.0

* [PATCH net-next 4/6] net: core: factor out core busy polling logic to sk_busy_loop_once()
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
                   ` (2 preceding siblings ...)
  2016-03-31  5:50 ` [PATCH net-next 3/6] macvtap: " Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 5/6] net: export napi_by_id() Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 6/6] vhost_net: net device rx busy polling support Jason Wang
  5 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

This patch factors out the core busy polling logic into
sk_busy_loop_once() so that it can be reused by other modules.
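
A minimal sketch of an external caller (this mirrors what patch 6/6
does in vhost_net; poll_done() is a placeholder for the caller's own
exit condition, not a real kernel helper):

	/* Busy poll one napi instance until the caller is satisfied.
	 * The napi must have been resolved (and kept valid) under RCU. */
	static void busy_poll_sock(struct sock *sk, struct napi_struct *napi)
	{
		while (!poll_done(sk))
			sk_busy_loop_once(sk, napi);	/* one polling round */
	}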

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/busy_poll.h |  7 ++++++
 net/core/dev.c          | 59 ++++++++++++++++++++++++++++---------------------
 2 files changed, 41 insertions(+), 25 deletions(-)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index 2fbeb13..e765e23 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -73,6 +73,7 @@ static inline bool busy_loop_timeout(unsigned long end_time)
 }
 
 bool sk_busy_loop(struct sock *sk, int nonblock);
+int sk_busy_loop_once(struct sock *sk, struct napi_struct *napi);
 
 /* used in the NIC receive handler to mark the skb */
 static inline void skb_mark_napi_id(struct sk_buff *skb,
@@ -117,6 +118,12 @@ static inline bool busy_loop_timeout(unsigned long end_time)
 	return true;
 }
 
+static inline int sk_busy_loop_once(struct sock *sk,
+				    struct napi_struct *napi)
+{
+	return 0;
+}
+
 static inline bool sk_busy_loop(struct sock *sk, int nonblock)
 {
 	return false;
diff --git a/net/core/dev.c b/net/core/dev.c
index b9bcbe7..a2f0c46 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4902,10 +4902,42 @@ static struct napi_struct *napi_by_id(unsigned int napi_id)
 
 #if defined(CONFIG_NET_RX_BUSY_POLL)
 #define BUSY_POLL_BUDGET 8
+int sk_busy_loop_once(struct sock *sk, struct napi_struct *napi)
+{
+	int (*busy_poll)(struct napi_struct *dev);
+	int rc = 0;
+
+	/* Note: ndo_busy_poll method is optional in linux-4.5 */
+	busy_poll = napi->dev->netdev_ops->ndo_busy_poll;
+
+	local_bh_disable();
+	if (busy_poll) {
+		rc = busy_poll(napi);
+	} else if (napi_schedule_prep(napi)) {
+		void *have = netpoll_poll_lock(napi);
+
+		if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
+			rc = napi->poll(napi, BUSY_POLL_BUDGET);
+			trace_napi_poll(napi);
+			if (rc == BUSY_POLL_BUDGET) {
+				napi_complete_done(napi, rc);
+				napi_schedule(napi);
+			}
+		}
+		netpoll_poll_unlock(have);
+	}
+	if (rc > 0)
+		NET_ADD_STATS_BH(sock_net(sk),
+				 LINUX_MIB_BUSYPOLLRXPACKETS, rc);
+	local_bh_enable();
+
+	return rc;
+}
+EXPORT_SYMBOL(sk_busy_loop_once);
+
 bool sk_busy_loop(struct sock *sk, int nonblock)
 {
 	unsigned long end_time = !nonblock ? sk_busy_loop_end_time(sk) : 0;
-	int (*busy_poll)(struct napi_struct *dev);
 	struct napi_struct *napi;
 	int rc = false;
 
@@ -4915,31 +4947,8 @@ bool sk_busy_loop(struct sock *sk, int nonblock)
 	if (!napi)
 		goto out;
 
-	/* Note: ndo_busy_poll method is optional in linux-4.5 */
-	busy_poll = napi->dev->netdev_ops->ndo_busy_poll;
-
 	do {
-		rc = 0;
-		local_bh_disable();
-		if (busy_poll) {
-			rc = busy_poll(napi);
-		} else if (napi_schedule_prep(napi)) {
-			void *have = netpoll_poll_lock(napi);
-
-			if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
-				rc = napi->poll(napi, BUSY_POLL_BUDGET);
-				trace_napi_poll(napi);
-				if (rc == BUSY_POLL_BUDGET) {
-					napi_complete_done(napi, rc);
-					napi_schedule(napi);
-				}
-			}
-			netpoll_poll_unlock(have);
-		}
-		if (rc > 0)
-			NET_ADD_STATS_BH(sock_net(sk),
-					 LINUX_MIB_BUSYPOLLRXPACKETS, rc);
-		local_bh_enable();
+		rc = sk_busy_loop_once(sk, napi);
 
 		if (rc == LL_FLUSH_FAILED)
 			break; /* permanent failure */
-- 
2.5.0

* [PATCH net-next 5/6] net: export napi_by_id()
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
                   ` (3 preceding siblings ...)
  2016-03-31  5:50 ` [PATCH net-next 4/6] net: core: factor out core busy polling logic to sk_busy_loop_once() Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  2016-03-31  5:50 ` [PATCH net-next 6/6] vhost_net: net device rx busy polling support Jason Wang
  5 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

This patch exports napi_by_id(), which will be used by vhost_net for
socket busy polling.
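
Note that napi_by_id() takes no reference on the returned napi_struct,
so a caller must stay inside rcu_read_lock() for both the lookup and
any use of the result. A sketch (using sk_napi_id as the lookup key,
per the tun/macvtap patches in this series):

	rcu_read_lock();
	napi = napi_by_id(sk->sk_napi_id);
	if (napi)
		sk_busy_loop_once(sk, napi);	/* napi valid only under RCU */
	rcu_read_unlock();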

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/busy_poll.h | 1 +
 net/core/dev.c          | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index e765e23..dc9c76d 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -74,6 +74,7 @@ static inline bool busy_loop_timeout(unsigned long end_time)
 
 bool sk_busy_loop(struct sock *sk, int nonblock);
 int sk_busy_loop_once(struct sock *sk, struct napi_struct *napi);
+struct napi_struct *napi_by_id(unsigned int napi_id);
 
 /* used in the NIC receive handler to mark the skb */
 static inline void skb_mark_napi_id(struct sk_buff *skb,
diff --git a/net/core/dev.c b/net/core/dev.c
index a2f0c46..b98d210 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4888,7 +4888,7 @@ void napi_complete_done(struct napi_struct *n, int work_done)
 EXPORT_SYMBOL(napi_complete_done);
 
 /* must be called under rcu_read_lock(), as we dont take a reference */
-static struct napi_struct *napi_by_id(unsigned int napi_id)
+struct napi_struct *napi_by_id(unsigned int napi_id)
 {
 	unsigned int hash = napi_id % HASH_SIZE(napi_hash);
 	struct napi_struct *napi;
@@ -4899,6 +4899,7 @@ static struct napi_struct *napi_by_id(unsigned int napi_id)
 
 	return NULL;
 }
+EXPORT_SYMBOL(napi_by_id);
 
 #if defined(CONFIG_NET_RX_BUSY_POLL)
 #define BUSY_POLL_BUDGET 8
-- 
2.5.0

* [PATCH net-next 6/6] vhost_net: net device rx busy polling support
  2016-03-31  5:50 [PATCH net-next 0/6] net device rx busy polling support in vhost_net Jason Wang
                   ` (4 preceding siblings ...)
  2016-03-31  5:50 ` [PATCH net-next 5/6] net: export napi_by_id() Jason Wang
@ 2016-03-31  5:50 ` Jason Wang
  5 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-03-31  5:50 UTC (permalink / raw)
  To: davem, mst, netdev, linux-kernel; +Cc: Jason Wang

This patch lets vhost_net try rx busy polling of the underlying net
device when busy polling is enabled. Tests show some improvements on
TCP_RR:

smp=1 queue=1
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +4%/   +3%/   +3%/   +3%/  +22%
    1/    50/   +2%/   +2%/   +2%/   +2%/    0%
    1/   100/   +1%/    0%/   +1%/   +1%/   -1%
    1/   200/   +2%/   +1%/   +2%/   +2%/    0%
   64/     1/   +1%/   +3%/   +1%/   +1%/   +1%
   64/    50/    0%/    0%/    0%/    0%/   -1%
   64/   100/   +1%/    0%/   +1%/   +1%/    0%
   64/   200/    0%/    0%/   +2%/   +2%/    0%
  256/     1/   +2%/   +2%/   +2%/   +2%/   +2%
  256/    50/   +3%/   +3%/   +3%/   +3%/    0%
  256/   100/   +1%/   +1%/   +2%/   +2%/    0%
  256/   200/    0%/    0%/   +1%/   +1%/   +1%
 1024/     1/   +2%/   +2%/   +2%/   +2%/   +2%
 1024/    50/   -1%/   -1%/   -1%/   -1%/   -2%
 1024/   100/   +1%/   +1%/    0%/    0%/   -1%
 1024/   200/   +2%/   +1%/   +2%/   +2%/    0%

smp=8 queue=1
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +1%/   -5%/   +1%/   +1%/    0%
    1/    50/   +1%/    0%/   +1%/   +1%/   -1%
    1/   100/   -1%/   -1%/   -2%/   -2%/   -4%
    1/   200/    0%/    0%/    0%/    0%/   -1%
   64/     1/   -2%/  -10%/   -2%/   -2%/   -2%
   64/    50/   -1%/   -1%/   -1%/   -1%/   -2%
   64/   100/   -1%/    0%/    0%/    0%/   -1%
   64/   200/   -1%/   -1%/    0%/    0%/    0%
  256/     1/   +7%/  +25%/   +7%/   +7%/   +7%
  256/    50/   +2%/   +2%/   +2%/   +2%/   -1%
  256/   100/   -1%/   -1%/   -1%/   -1%/   -3%
  256/   200/   +1%/    0%/    0%/    0%/    0%
 1024/     1/   +5%/  +15%/   +5%/   +5%/   +4%
 1024/    50/    0%/    0%/   -1%/   -1%/   -1%
 1024/   100/   -1%/   -1%/   -1%/   -1%/   -2%
 1024/   200/   -1%/    0%/   -1%/   -1%/   -1%

smp=8 queue=8
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +5%/   +2%/   +5%/   +5%/    0%
    1/    50/   +2%/   +2%/   +3%/   +3%/  -20%
    1/   100/   +5%/   +5%/   +5%/   +5%/  -13%
    1/   200/   +8%/   +8%/   +6%/   +6%/  -12%
   64/     1/    0%/   +4%/    0%/    0%/  +18%
   64/    50/   +6%/   +5%/   +5%/   +5%/   -7%
   64/   100/   +4%/   +4%/   +5%/   +5%/  -12%
   64/   200/   +5%/   +5%/   +5%/   +5%/  -12%
  256/     1/    0%/   -3%/    0%/    0%/   +1%
  256/    50/   +3%/   +3%/   +3%/   +3%/   -2%
  256/   100/   +6%/   +5%/   +5%/   +5%/  -11%
  256/   200/   +4%/   +4%/   +4%/   +4%/  -13%
 1024/     1/    0%/   -3%/    0%/    0%/   -6%
 1024/    50/   +1%/   +1%/   +1%/   +1%/  -10%
 1024/   100/   +4%/   +4%/   +5%/   +5%/  -11%
 1024/   200/   +4%/   +5%/   +4%/   +4%/  -12%

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f744eeb..7350f6c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -27,6 +27,7 @@
 #include <linux/if_vlan.h>
 
 #include <net/sock.h>
+#include <net/busy_poll.h>
 
 #include "vhost.h"
 
@@ -307,15 +308,24 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
 				    unsigned int *out_num, unsigned int *in_num)
 {
 	unsigned long uninitialized_var(endtime);
+	struct socket *sock = vq->private_data;
+	struct sock *sk = sock->sk;
+	struct napi_struct *napi;
 	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
 				    out_num, in_num, NULL, NULL);
 
 	if (r == vq->num && vq->busyloop_timeout) {
 		preempt_disable();
+		rcu_read_lock();
+		napi = napi_by_id(sk->sk_napi_id);
 		endtime = busy_clock() + vq->busyloop_timeout;
 		while (vhost_can_busy_poll(vq->dev, endtime) &&
-		       vhost_vq_avail_empty(vq->dev, vq))
+		       vhost_vq_avail_empty(vq->dev, vq)) {
+			if (napi)
+				sk_busy_loop_once(sk, napi);
 			cpu_relax_lowlatency();
+		}
+		rcu_read_unlock();
 		preempt_enable();
 		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
 					out_num, in_num, NULL, NULL);
@@ -476,6 +486,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
 	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
 	struct vhost_virtqueue *vq = &nvq->vq;
 	unsigned long uninitialized_var(endtime);
+	struct napi_struct *napi;
 	int len = peek_head_len(sk);
 
 	if (!len && vq->busyloop_timeout) {
@@ -484,13 +495,20 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
 		vhost_disable_notify(&net->dev, vq);
 
 		preempt_disable();
+		rcu_read_lock();
+
+		napi = napi_by_id(sk->sk_napi_id);
 		endtime = busy_clock() + vq->busyloop_timeout;
 
 		while (vhost_can_busy_poll(&net->dev, endtime) &&
 		       skb_queue_empty(&sk->sk_receive_queue) &&
-		       vhost_vq_avail_empty(&net->dev, vq))
+		       vhost_vq_avail_empty(&net->dev, vq)) {
+			if (napi)
+				sk_busy_loop_once(sk, napi);
 			cpu_relax_lowlatency();
+		}
 
+		rcu_read_unlock();
 		preempt_enable();
 
 		if (vhost_enable_notify(&net->dev, vq))
-- 
2.5.0

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-03-31  5:50 ` [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu Jason Wang
@ 2016-03-31 10:32   ` Eric Dumazet
  2016-03-31 20:01     ` David Miller
  2016-04-01  2:13     ` Jason Wang
  0 siblings, 2 replies; 16+ messages in thread
From: Eric Dumazet @ 2016-03-31 10:32 UTC (permalink / raw)
  To: Jason Wang; +Cc: davem, mst, netdev, linux-kernel

On Thu, 2016-03-31 at 13:50 +0800, Jason Wang wrote:
> We use a union for napi_id and sender_cpu. This is fine for most
> cases, except when we want to support busy polling for tun, which
> needs napi_id to be stored in the skb and passed to the socket during
> tun_net_xmit(). In that case, if XPS is enabled, napi_id is overridden
> with sender_cpu before tun_net_xmit() is called. Fix this by not
> using a union for napi_id and sender_cpu.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  include/linux/skbuff.h | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 15d0df9..8aee891 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -743,11 +743,11 @@ struct sk_buff {
>  	__u32			hash;
>  	__be16			vlan_proto;
>  	__u16			vlan_tci;
> -#if defined(CONFIG_NET_RX_BUSY_POLL) || defined(CONFIG_XPS)
> -	union {
> -		unsigned int	napi_id;
> -		unsigned int	sender_cpu;
> -	};
> +#if defined(CONFIG_NET_RX_BUSY_POLL)
> +	unsigned int		napi_id;
> +#endif
> +#if defined(CONFIG_XPS)
> +	unsigned int		sender_cpu;
>  #endif
>  	union {
>  #ifdef CONFIG_NETWORK_SECMARK

Hmmm...

This is a serious problem.

Making the skb bigger (8 bytes, because of alignment) was not
considered acceptable when sender_cpu was introduced. We worked quite
hard to avoid this, as the git history shows :(

Can you describe the problem and the code path more precisely?

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-03-31 10:32   ` Eric Dumazet
@ 2016-03-31 20:01     ` David Miller
  2016-04-01  2:46       ` Jason Wang
  2016-04-01  2:13     ` Jason Wang
  1 sibling, 1 reply; 16+ messages in thread
From: David Miller @ 2016-03-31 20:01 UTC (permalink / raw)
  To: eric.dumazet; +Cc: jasowang, mst, netdev, linux-kernel

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thu, 31 Mar 2016 03:32:21 -0700

> On Thu, 2016-03-31 at 13:50 +0800, Jason Wang wrote:
>> We use a union for napi_id and sender_cpu. This is fine for most
>> cases, except when we want to support busy polling for tun, which
>> needs napi_id to be stored in the skb and passed to the socket during
>> tun_net_xmit(). In that case, if XPS is enabled, napi_id is overridden
>> with sender_cpu before tun_net_xmit() is called. Fix this by not
>> using a union for napi_id and sender_cpu.
>> 
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>  include/linux/skbuff.h | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>> 
>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>> index 15d0df9..8aee891 100644
>> --- a/include/linux/skbuff.h
>> +++ b/include/linux/skbuff.h
>> @@ -743,11 +743,11 @@ struct sk_buff {
>>  	__u32			hash;
>>  	__be16			vlan_proto;
>>  	__u16			vlan_tci;
>> -#if defined(CONFIG_NET_RX_BUSY_POLL) || defined(CONFIG_XPS)
>> -	union {
>> -		unsigned int	napi_id;
>> -		unsigned int	sender_cpu;
>> -	};
>> +#if defined(CONFIG_NET_RX_BUSY_POLL)
>> +	unsigned int		napi_id;
>> +#endif
>> +#if defined(CONFIG_XPS)
>> +	unsigned int		sender_cpu;
>>  #endif
>>  	union {
>>  #ifdef CONFIG_NETWORK_SECMARK
> 
> Hmmm...
> 
> This is a serious problem.
> 
> Making the skb bigger (8 bytes, because of alignment) was not
> considered acceptable when sender_cpu was introduced. We worked quite
> hard to avoid this, as the git history shows :(
> 
> Can you describe the problem and the code path more precisely?

From what I can see they are doing busy poll loops in the TX code paths,
as well as the RX code paths, of vhost.

Doing this in the TX side makes little sense to me.  The busy poll
implementations in the drivers only process their RX queues when
->ndo_busy_poll() is invoked.  So I wonder what this is accomplishing
for the vhost TX case?

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-03-31 10:32   ` Eric Dumazet
  2016-03-31 20:01     ` David Miller
@ 2016-04-01  2:13     ` Jason Wang
  2016-04-01  2:55       ` Eric Dumazet
  1 sibling, 1 reply; 16+ messages in thread
From: Jason Wang @ 2016-04-01  2:13 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, mst, netdev, linux-kernel



On 03/31/2016 06:32 PM, Eric Dumazet wrote:
> On Thu, 2016-03-31 at 13:50 +0800, Jason Wang wrote:
>> We use a union for napi_id and sender_cpu. This is fine for most
>> cases, except when we want to support busy polling for tun, which
>> needs napi_id to be stored in the skb and passed to the socket during
>> tun_net_xmit(). In that case, if XPS is enabled, napi_id is overridden
>> with sender_cpu before tun_net_xmit() is called. Fix this by not
>> using a union for napi_id and sender_cpu.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>  include/linux/skbuff.h | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>> index 15d0df9..8aee891 100644
>> --- a/include/linux/skbuff.h
>> +++ b/include/linux/skbuff.h
>> @@ -743,11 +743,11 @@ struct sk_buff {
>>  	__u32			hash;
>>  	__be16			vlan_proto;
>>  	__u16			vlan_tci;
>> -#if defined(CONFIG_NET_RX_BUSY_POLL) || defined(CONFIG_XPS)
>> -	union {
>> -		unsigned int	napi_id;
>> -		unsigned int	sender_cpu;
>> -	};
>> +#if defined(CONFIG_NET_RX_BUSY_POLL)
>> +	unsigned int		napi_id;
>> +#endif
>> +#if defined(CONFIG_XPS)
>> +	unsigned int		sender_cpu;
>>  #endif
>>  	union {
>>  #ifdef CONFIG_NETWORK_SECMARK
> Hmmm...
>
> This is a serious problem.
>
> Making the skb bigger (8 bytes, because of alignment) was not
> considered acceptable when sender_cpu was introduced. We worked quite
> hard to avoid this, as the git history shows :(
>
> Can you describe the problem and the code path more precisely?
>

The problem is that we want to support busy polling for tun. This
needs the napi_id to be passed to the tun socket by sk_mark_napi_id()
during tun_net_xmit(). But before the skb reaches that point, XPS
will have set sender_cpu, so we can't see the correct napi_id.

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-03-31 20:01     ` David Miller
@ 2016-04-01  2:46       ` Jason Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-04-01  2:46 UTC (permalink / raw)
  To: David Miller, eric.dumazet; +Cc: mst, netdev, linux-kernel



On 04/01/2016 04:01 AM, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Thu, 31 Mar 2016 03:32:21 -0700
>
>> On Thu, 2016-03-31 at 13:50 +0800, Jason Wang wrote:
>>> We use a union for napi_id and sender_cpu. This is fine for most
>>> cases, except when we want to support busy polling for tun, which
>>> needs napi_id to be stored in the skb and passed to the socket during
>>> tun_net_xmit(). In that case, if XPS is enabled, napi_id is overridden
>>> with sender_cpu before tun_net_xmit() is called. Fix this by not
>>> using a union for napi_id and sender_cpu.
>>>
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>>  include/linux/skbuff.h | 10 +++++-----
>>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>>> index 15d0df9..8aee891 100644
>>> --- a/include/linux/skbuff.h
>>> +++ b/include/linux/skbuff.h
>>> @@ -743,11 +743,11 @@ struct sk_buff {
>>>  	__u32			hash;
>>>  	__be16			vlan_proto;
>>>  	__u16			vlan_tci;
>>> -#if defined(CONFIG_NET_RX_BUSY_POLL) || defined(CONFIG_XPS)
>>> -	union {
>>> -		unsigned int	napi_id;
>>> -		unsigned int	sender_cpu;
>>> -	};
>>> +#if defined(CONFIG_NET_RX_BUSY_POLL)
>>> +	unsigned int		napi_id;
>>> +#endif
>>> +#if defined(CONFIG_XPS)
>>> +	unsigned int		sender_cpu;
>>>  #endif
>>>  	union {
>>>  #ifdef CONFIG_NETWORK_SECMARK
>> Hmmm...
>>
>> This is a serious problem.
>>
>> Making the skb bigger (8 bytes, because of alignment) was not
>> considered acceptable when sender_cpu was introduced. We worked quite
>> hard to avoid this, as the git history shows :(
>>
>> Can you describe the problem and the code path more precisely?
> From what I can see they are doing busy poll loops in the TX code paths,
> as well as the RX code paths, of vhost.
>
> Doing this in the TX side makes little sense to me.  The busy poll
> implementations in the drivers only process their RX queues when
> ->ndo_busy_poll() is invoked.  So I wonder what this is accomplishing
> for the vhost TX case?

In the vhost TX case, it's possible that new packets arrive at the rx
queue during tx polling. Considering that tx and rx are processed in
one thread, polling rx looks feasible to me.

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-04-01  2:13     ` Jason Wang
@ 2016-04-01  2:55       ` Eric Dumazet
  2016-04-01  4:49         ` Jason Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2016-04-01  2:55 UTC (permalink / raw)
  To: Jason Wang; +Cc: davem, mst, netdev, linux-kernel

On Fri, 2016-04-01 at 10:13 +0800, Jason Wang wrote:


> 
> The problem is that we want to support busy polling for tun. This
> needs the napi_id to be passed to the tun socket by sk_mark_napi_id()
> during tun_net_xmit(). But before the skb reaches that point, XPS
> will have set sender_cpu, so we can't see the correct napi_id.
> 

Looks like napi_id should have precedence then?

Only forwarding should allow the field to be cleared to allow XPS to do
its job.

Maybe skb_sender_cpu_clear() was removed too early (commit
64d4e3431e686dc37ce388ba531c4c4e866fb141)

Look, it is 8pm here, I am pretty sure a solution can be found,
but I also need to take a break; I started at 3am today...

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-04-01  2:55       ` Eric Dumazet
@ 2016-04-01  4:49         ` Jason Wang
  2016-04-01 13:04           ` Eric Dumazet
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2016-04-01  4:49 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, mst, netdev, linux-kernel



On 04/01/2016 10:55 AM, Eric Dumazet wrote:
> On Fri, 2016-04-01 at 10:13 +0800, Jason Wang wrote:
>
>
>> The problem is that we want to support busy polling for tun. This
>> needs the napi_id to be passed to the tun socket by sk_mark_napi_id()
>> during tun_net_xmit(). But before the skb reaches that point, XPS
>> will have set sender_cpu, so we can't see the correct napi_id.
>>
> Looks like napi_id should have precedence then?

But then, when busy polling is enabled, we may still hit the issue
that existed before commit 2bd82484bb4c5db1d5dc983ac7c409b2782e0154?
So it looks like sometimes (e.g. for tun) we need both fields.

>
> Only forwarding should allow the field to be cleared to allow XPS to do
> its job.
>
> Maybe skb_sender_cpu_clear() was removed too early (commit
> 64d4e3431e686dc37ce388ba531c4c4e866fb141)

Not sure I get you, but this will clear napi_id too.

> Look, it is 8pm here, I am pretty sure a solution can be found,
> but I also need to take a break; I started at 3am today...
>
>
>

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-04-01  4:49         ` Jason Wang
@ 2016-04-01 13:04           ` Eric Dumazet
  2016-04-05 15:05             ` Michael S. Tsirkin
  2016-04-06  6:22             ` Jason Wang
  0 siblings, 2 replies; 16+ messages in thread
From: Eric Dumazet @ 2016-04-01 13:04 UTC (permalink / raw)
  To: Jason Wang; +Cc: davem, mst, netdev, linux-kernel

On Fri, 2016-04-01 at 12:49 +0800, Jason Wang wrote:
> 
> On 04/01/2016 10:55 AM, Eric Dumazet wrote:
> > On Fri, 2016-04-01 at 10:13 +0800, Jason Wang wrote:
> >
> >
> >> The problem is that we want to support busy polling for tun. This
> >> needs the napi_id to be passed to the tun socket by sk_mark_napi_id()
> >> during tun_net_xmit(). But before the skb reaches that point, XPS
> >> will have set sender_cpu, so we can't see the correct napi_id.
> >>
> > Looks like napi_id should have precedence then?
> 
> But then, when busy polling is enabled, we may still hit the issue
> that existed before commit 2bd82484bb4c5db1d5dc983ac7c409b2782e0154?
> So it looks like sometimes (e.g. for tun) we need both fields.

You did not clearly show me the path you take where both fields would be
needed. If you expect me to do that, it won't happen.

> 
> >
> > Only forwarding should allow the field to be cleared to allow XPS to do
> > its job.
> >
> > Maybe skb_sender_cpu_clear() was removed too early (commit
> > 64d4e3431e686dc37ce388ba531c4c4e866fb141)
> 
> Not sure I get you, but this will clear napi_id too.

Only when allowed. In your case it would not be called.

Some people do not use tun, and want to forward or cook millions of
packets per second. sk_buff size is critical. 

If busy polling gives you a 5% performance improvement but costs
everyone else a performance decrease, that is a serious problem.

XPS is a sender problem, NAPI is a receiver problem. Fields should be
shared.

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-04-01 13:04           ` Eric Dumazet
@ 2016-04-05 15:05             ` Michael S. Tsirkin
  2016-04-06  6:22             ` Jason Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2016-04-05 15:05 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Jason Wang, davem, netdev, linux-kernel

On Fri, Apr 01, 2016 at 06:04:19AM -0700, Eric Dumazet wrote:
> On Fri, 2016-04-01 at 12:49 +0800, Jason Wang wrote:
> > 
> > On 04/01/2016 10:55 AM, Eric Dumazet wrote:
> > > On Fri, 2016-04-01 at 10:13 +0800, Jason Wang wrote:
> > >
> > >
> > >> The problem is that we want to support busy polling for tun. This
> > >> needs the napi_id to be passed to the tun socket by sk_mark_napi_id()
> > >> during tun_net_xmit(). But before the skb reaches that point, XPS
> > >> will have set sender_cpu, so we can't see the correct napi_id.
> > >>
> > > Looks like napi_id should have precedence then?
> > 
> > But then, when busy polling is enabled, we may still hit the issue
> > that existed before commit 2bd82484bb4c5db1d5dc983ac7c409b2782e0154?
> > So it looks like sometimes (e.g. for tun) we need both fields.
> 
> You did not clearly show me the path you take where both fields would be
> needed. If you expect me to do that, it won't happen.
> 
> > 
> > >
> > > Only forwarding should allow the field to be cleared to allow XPS to do
> > > its job.
> > >
> > > Maybe skb_sender_cpu_clear() was removed too early (commit
> > > 64d4e3431e686dc37ce388ba531c4c4e866fb141)
> > 
> > Not sure I get you, but this will clear napi_id too.
> 
> Only when allowed. In your case it would not be called.
> 
> Some people do not use tun, and want to forward or cook millions of
> packets per second. sk_buff size is critical. 
> 
> If busy polling gives you a 5% performance improvement but costs
> everyone else a performance decrease, that is a serious problem.
> 
> XPS is a sender problem, NAPI is a receiver problem. Fields should be
> shared.

Right. The issue, IIUC, is the weird way tun behaves: it's a netdev,
so Linux is a sender, but it has a socket in it, and then Linux is
the receiver too. I guess we need to find a way to special-case tun
somehow?

-- 
MST

* Re: [PATCH net-next 1/6] net: skbuff: don't use union for napi_id and sender_cpu
  2016-04-01 13:04           ` Eric Dumazet
  2016-04-05 15:05             ` Michael S. Tsirkin
@ 2016-04-06  6:22             ` Jason Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Jason Wang @ 2016-04-06  6:22 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, mst, netdev, linux-kernel



On 04/01/2016 09:04 PM, Eric Dumazet wrote:
> On Fri, 2016-04-01 at 12:49 +0800, Jason Wang wrote:
>> On 04/01/2016 10:55 AM, Eric Dumazet wrote:
>>> On Fri, 2016-04-01 at 10:13 +0800, Jason Wang wrote:
>>>
>>>
>>>> The problem is we want to support busy polling for tun. This needs
>>>> napi_id to be passed to tun socket by sk_mark_napi_id() during
>>>> tun_net_xmit(). But before reaching this, XPS will set sender_cpu will
>>>> make us can't see correct napi_id.
>>>>
>>> Looks like napi_id should have precedence then?
>> But then, when busy polling is enabled, we may still hit the issue
>> that existed before commit 2bd82484bb4c5db1d5dc983ac7c409b2782e0154?
>> So it looks like sometimes (e.g. for tun) we need both fields.
> You did not clearly show me the path you take where both fields would be
> needed. If you expect me to do that, it won't happen.

Right, since tun does not use XPS. But a stacked device on top of tun
does not know this and may still try to set sender_cpu.

>
>>> Only forwarding should allow the field to be cleared to allow XPS to do
>>> its job.
>>>
>>> Maybe skb_sender_cpu_clear() was removed too early (commit
>>> 64d4e3431e686dc37ce388ba531c4c4e866fb141)
>> Not sure I get you, but this will clear napi_id too.
> Only when allowed. In your case it would not be called.

Looks like I missed something; there used to be a skb_sender_cpu_clear()
in br_dev_queue_push_xmit() which was called before tun_net_xmit().

>
> Some people do not use tun, and want to forward or cook millions of
> packets per second. sk_buff size is critical. 
>
> If busy polling gives you a 5% performance improvement but costs
> everyone else a performance decrease, that is a serious problem.
>
> XPS is a sender problem, NAPI is a receiver problem. Fields should be
> shared.
>

How about introducing a napi_id field in softnet_data to record the
napi that is being polled? Then sk_mark_napi_id() could fetch the
napi_id from there instead of from the skb (rough sketch below).
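
A rough sketch of the idea (hypothetical; the field name and placement
are invented here, this is not a merged design):

	/* 1) record the napi currently being polled in per-cpu state,
	 *    e.g. a new field in struct softnet_data: */
	unsigned int	napi_id;	/* 0 when no napi is being polled */

	/* 2) in the softirq poll loop, bracket each napi->poll() call: */
	this_cpu_write(softnet_data.napi_id, n->napi_id);
	work = n->poll(n, weight);
	this_cpu_write(softnet_data.napi_id, 0);

	/* 3) sk_mark_napi_id() then reads the per-cpu value instead of
	 *    the skb, so the union with sender_cpu stops mattering: */
	sk->sk_napi_id = this_cpu_read(softnet_data.napi_id);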
