linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
@ 2017-05-17  4:14 Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 1/9] ptr_ring: add ptr_ring_unconsume Jason Wang
                   ` (10 more replies)
  0 siblings, 11 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This series implements rx batching for vhost-net. This is done by
batching the dequeuing from the skb_array exported by the underlying
socket and passing the skb back through msg_control to finish the
userspace copy. This is also a prerequisite for further batching
improvements on the rx path.

Tests show at most a 7.56% improvement in rx pps on top of batch
zeroing, and no obvious changes in the TCP_STREAM/TCP_RR results.
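
To illustrate the idea, a minimal sketch of what the receive path ends
up doing (not a literal excerpt from the series; rx_array, msg and len
stand in for the real variables used in the patches below, and error
handling is omitted):

        struct sk_buff *batch[VHOST_RX_BATCH];
        int i, n;

        /* One lock acquisition per batch instead of one per packet. */
        n = skb_array_consume_batched(rx_array, batch, VHOST_RX_BATCH);

        /* Each skb is handed back through msg_control so the underlying
         * tun/tap socket only has to do the userspace copy.
         */
        for (i = 0; i < n; i++) {
                msg.msg_control = batch[i];
                sock->ops->recvmsg(sock, &msg, len, MSG_DONTWAIT | MSG_TRUNC);
        }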

Please review.

Thanks

Changes from V4:
- drop batch zeroing patch
- renew the performance numbers
- move skb pointer array out of vhost_net structure

Changes from V3:
- add batch zeroing patch to fix the build warnings

Changes from V2:
- rebase to net-next HEAD
- use unconsume helpers to put skb back on releasing
- introduce and use vhost_net internal buffer helpers
- renew performance numbers on top of batch zeroing

Changes from V1:
- switch to use for() in __ptr_ring_consume_batched()
- rename peek_head_len_batched() to fetch_skbs()
- use skb_array_consume_batched() instead of
  skb_array_consume_batched_bh() since no consumer runs in bh
- drop the lockless peeking patch since skb_array could be resized, so
  it's not safe to call the lockless one

Jason Wang (8):
  skb_array: introduce skb_array_unconsume
  ptr_ring: introduce batch dequeuing
  skb_array: introduce batch dequeuing
  tun: export skb_array
  tap: export skb_array
  tun: support receiving skb through msg_control
  tap: support receiving skb from msg_control
  vhost_net: try batch dequeuing from skb array

Michael S. Tsirkin (1):
  ptr_ring: add ptr_ring_unconsume

 drivers/net/tap.c         |  25 +++++++--
 drivers/net/tun.c         |  31 ++++++++---
 drivers/vhost/net.c       | 128 +++++++++++++++++++++++++++++++++++++++++++---
 include/linux/if_tap.h    |   5 ++
 include/linux/if_tun.h    |   5 ++
 include/linux/ptr_ring.h  | 120 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/skb_array.h |  31 +++++++++++
 7 files changed, 327 insertions(+), 18 deletions(-)

-- 
2.7.4


* [PATCH net-next V5 1/9] ptr_ring: add ptr_ring_unconsume
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 2/9] skb_array: introduce skb_array_unconsume Jason Wang
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

From: "Michael S. Tsirkin" <mst@redhat.com>

Applications that consume a batch of entries in one go
can benefit from the ability to return some of them back
into the ring.

Add an API for that - assuming there's space. If there's no space we
naturally can't do this and have to drop entries, but this implies the
ring is full so we'd likely drop some anyway.
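
As a usage sketch (illustrative only; free_entry is a hypothetical
destructor for the ring's entries), a consumer that pulled a batch but
has to stop early can push the unprocessed entries back:

        void *batch[16];
        int n = 0;

        while (n < 16) {
                void *ptr = ptr_ring_consume(r);
                if (!ptr)
                        break;
                batch[n++] = ptr;
        }

        /* Stop early, e.g. because the backend is being released: return
         * the whole batch; anything that no longer fits in the ring is
         * freed through the destroy callback.
         */
        ptr_ring_unconsume(r, batch, n, free_entry);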

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/linux/ptr_ring.h | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index 6b2e0dd..796b90f 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -403,6 +403,61 @@ static inline int ptr_ring_init(struct ptr_ring *r, int size, gfp_t gfp)
 	return 0;
 }
 
+/*
+ * Return entries into ring. Destroy entries that don't fit.
+ *
+ * Note: this is expected to be a rare slow path operation.
+ *
+ * Note: producer lock is nested within consumer lock, so if you
+ * resize you must make sure all uses nest correctly.
+ * In particular if you consume ring in interrupt or BH context, you must
+ * disable interrupts/BH when doing so.
+ */
+static inline void ptr_ring_unconsume(struct ptr_ring *r, void **batch, int n,
+				      void (*destroy)(void *))
+{
+	unsigned long flags;
+	int head;
+
+	spin_lock_irqsave(&r->consumer_lock, flags);
+	spin_lock(&r->producer_lock);
+
+	if (!r->size)
+		goto done;
+
+	/*
+	 * Clean out buffered entries (for simplicity). This way following code
+	 * can test entries for NULL and if not assume they are valid.
+	 */
+	head = r->consumer_head - 1;
+	while (likely(head >= r->consumer_tail))
+		r->queue[head--] = NULL;
+	r->consumer_tail = r->consumer_head;
+
+	/*
+	 * Go over entries in batch, start moving head back and copy entries.
+	 * Stop when we run into previously unconsumed entries.
+	 */
+	while (n) {
+		head = r->consumer_head - 1;
+		if (head < 0)
+			head = r->size - 1;
+		if (r->queue[head]) {
+			/* This batch entry will have to be destroyed. */
+			goto done;
+		}
+		r->queue[head] = batch[--n];
+		r->consumer_tail = r->consumer_head = head;
+	}
+
+done:
+	/* Destroy all entries left in the batch. */
+	while (n)
+		destroy(batch[--n]);
+	spin_unlock(&r->producer_lock);
+	spin_unlock_irqrestore(&r->consumer_lock, flags);
+}
+
 static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
 					   int size, gfp_t gfp,
 					   void (*destroy)(void *))
-- 
2.7.4


* [PATCH net-next V5 2/9] skb_array: introduce skb_array_unconsume
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 1/9] ptr_ring: add ptr_ring_unconsume Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 3/9] ptr_ring: introduce batch dequeuing Jason Wang
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/linux/skb_array.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index f4dfade..79850b6 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -156,6 +156,12 @@ static void __skb_array_destroy_skb(void *ptr)
 	kfree_skb(ptr);
 }
 
+static inline void skb_array_unconsume(struct skb_array *a,
+				       struct sk_buff **skbs, int n)
+{
+	ptr_ring_unconsume(&a->ring, (void **)skbs, n, __skb_array_destroy_skb);
+}
+
 static inline int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
 {
 	return ptr_ring_resize(&a->ring, size, gfp, __skb_array_destroy_skb);
-- 
2.7.4


* [PATCH net-next V5 3/9] ptr_ring: introduce batch dequeuing
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 1/9] ptr_ring: add ptr_ring_unconsume Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 2/9] skb_array: introduce skb_array_unconsume Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 4/9] skb_array: " Jason Wang
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This patch introduces a batched version of consuming: the consumer can
dequeue more than one pointer from the ring at a time. We don't care
about the ordering of the reads here, so no compiler barrier is needed.
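
As a rough usage sketch (not taken from the series), callers pick the
variant matching their context, just as with the existing single-entry
consume helpers:

        void *ptrs[64];
        int n;

        /* Process-context consumer (the vhost-net case later in this
         * series): the plain spinlock variant is enough.
         */
        n = ptr_ring_consume_batched(r, ptrs, 64);

        /* A consumer that also runs in BH context would use instead: */
        n = ptr_ring_consume_batched_bh(r, ptrs, 64);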

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/linux/ptr_ring.h | 65 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index 796b90f..d8c97ec 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -278,6 +278,22 @@ static inline void *__ptr_ring_consume(struct ptr_ring *r)
 	return ptr;
 }
 
+static inline int __ptr_ring_consume_batched(struct ptr_ring *r,
+					     void **array, int n)
+{
+	void *ptr;
+	int i;
+
+	for (i = 0; i < n; i++) {
+		ptr = __ptr_ring_consume(r);
+		if (!ptr)
+			break;
+		array[i] = ptr;
+	}
+
+	return i;
+}
+
 /*
  * Note: resize (below) nests producer lock within consumer lock, so if you
  * call this in interrupt or BH context, you must disable interrupts/BH when
@@ -328,6 +344,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
 	return ptr;
 }
 
+static inline int ptr_ring_consume_batched(struct ptr_ring *r,
+					   void **array, int n)
+{
+	int ret;
+
+	spin_lock(&r->consumer_lock);
+	ret = __ptr_ring_consume_batched(r, array, n);
+	spin_unlock(&r->consumer_lock);
+
+	return ret;
+}
+
+static inline int ptr_ring_consume_batched_irq(struct ptr_ring *r,
+					       void **array, int n)
+{
+	int ret;
+
+	spin_lock_irq(&r->consumer_lock);
+	ret = __ptr_ring_consume_batched(r, array, n);
+	spin_unlock_irq(&r->consumer_lock);
+
+	return ret;
+}
+
+static inline int ptr_ring_consume_batched_any(struct ptr_ring *r,
+					       void **array, int n)
+{
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&r->consumer_lock, flags);
+	ret = __ptr_ring_consume_batched(r, array, n);
+	spin_unlock_irqrestore(&r->consumer_lock, flags);
+
+	return ret;
+}
+
+static inline int ptr_ring_consume_batched_bh(struct ptr_ring *r,
+					      void **array, int n)
+{
+	int ret;
+
+	spin_lock_bh(&r->consumer_lock);
+	ret = __ptr_ring_consume_batched(r, array, n);
+	spin_unlock_bh(&r->consumer_lock);
+
+	return ret;
+}
+
 /* Cast to structure type and call a function without discarding from FIFO.
  * Function must return a value.
  * Callers must take consumer_lock.
-- 
2.7.4


* [PATCH net-next V5 4/9] skb_array: introduce batch dequeuing
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (2 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 3/9] ptr_ring: introduce batch dequeuing Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 5/9] tun: export skb_array Jason Wang
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/linux/skb_array.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index 79850b6..35226cd 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -97,21 +97,46 @@ static inline struct sk_buff *skb_array_consume(struct skb_array *a)
 	return ptr_ring_consume(&a->ring);
 }
 
+static inline int skb_array_consume_batched(struct skb_array *a,
+					    struct sk_buff **array, int n)
+{
+	return ptr_ring_consume_batched(&a->ring, (void **)array, n);
+}
+
 static inline struct sk_buff *skb_array_consume_irq(struct skb_array *a)
 {
 	return ptr_ring_consume_irq(&a->ring);
 }
 
+static inline int skb_array_consume_batched_irq(struct skb_array *a,
+						struct sk_buff **array, int n)
+{
+	return ptr_ring_consume_batched_irq(&a->ring, (void **)array, n);
+}
+
 static inline struct sk_buff *skb_array_consume_any(struct skb_array *a)
 {
 	return ptr_ring_consume_any(&a->ring);
 }
 
+static inline int skb_array_consume_batched_any(struct skb_array *a,
+						struct sk_buff **array, int n)
+{
+	return ptr_ring_consume_batched_any(&a->ring, (void **)array, n);
+}
+
+
 static inline struct sk_buff *skb_array_consume_bh(struct skb_array *a)
 {
 	return ptr_ring_consume_bh(&a->ring);
 }
 
+static inline int skb_array_consume_batched_bh(struct skb_array *a,
+					       struct sk_buff **array, int n)
+{
+	return ptr_ring_consume_batched_bh(&a->ring, (void **)array, n);
+}
+
 static inline int __skb_array_len_with_tag(struct sk_buff *skb)
 {
 	if (likely(skb)) {
-- 
2.7.4


* [PATCH net-next V5 5/9] tun: export skb_array
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (3 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 4/9] skb_array: " Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 6/9] tap: " Jason Wang
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This patch exports the skb_array through tun_get_skb_array(). Callers
can then manipulate the skb array directly.
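
A caller is expected to use it roughly as follows (sketch only; the
fallback behaviour is the caller's choice):

        struct skb_array *array = tun_get_skb_array(file);

        if (IS_ERR(array))
                array = NULL;   /* not a tun file: fall back to the skb queue */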

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tun.c      | 13 +++++++++++++
 include/linux/if_tun.h |  5 +++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index bbd707b..3cbfc5c 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -2626,6 +2626,19 @@ struct socket *tun_get_socket(struct file *file)
 }
 EXPORT_SYMBOL_GPL(tun_get_socket);
 
+struct skb_array *tun_get_skb_array(struct file *file)
+{
+	struct tun_file *tfile;
+
+	if (file->f_op != &tun_fops)
+		return ERR_PTR(-EINVAL);
+	tfile = file->private_data;
+	if (!tfile)
+		return ERR_PTR(-EBADFD);
+	return &tfile->tx_array;
+}
+EXPORT_SYMBOL_GPL(tun_get_skb_array);
+
 module_init(tun_init);
 module_exit(tun_cleanup);
 MODULE_DESCRIPTION(DRV_DESCRIPTION);
diff --git a/include/linux/if_tun.h b/include/linux/if_tun.h
index ed6da2e..bf9bdf4 100644
--- a/include/linux/if_tun.h
+++ b/include/linux/if_tun.h
@@ -19,6 +19,7 @@
 
 #if defined(CONFIG_TUN) || defined(CONFIG_TUN_MODULE)
 struct socket *tun_get_socket(struct file *);
+struct skb_array *tun_get_skb_array(struct file *file);
 #else
 #include <linux/err.h>
 #include <linux/errno.h>
@@ -28,5 +29,9 @@ static inline struct socket *tun_get_socket(struct file *f)
 {
 	return ERR_PTR(-EINVAL);
 }
+static inline struct skb_array *tun_get_skb_array(struct file *f)
+{
+	return ERR_PTR(-EINVAL);
+}
 #endif /* CONFIG_TUN */
 #endif /* __IF_TUN_H */
-- 
2.7.4


* [PATCH net-next V5 6/9] tap: export skb_array
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (4 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 5/9] tun: export skb_array Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 7/9] tun: support receiving skb through msg_control Jason Wang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This patch exports the skb_array through tap_get_skb_array(). Callers
can then manipulate the skb array directly.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tap.c      | 13 +++++++++++++
 include/linux/if_tap.h |  5 +++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index 4d4173d..abdaf86 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -1193,6 +1193,19 @@ struct socket *tap_get_socket(struct file *file)
 }
 EXPORT_SYMBOL_GPL(tap_get_socket);
 
+struct skb_array *tap_get_skb_array(struct file *file)
+{
+	struct tap_queue *q;
+
+	if (file->f_op != &tap_fops)
+		return ERR_PTR(-EINVAL);
+	q = file->private_data;
+	if (!q)
+		return ERR_PTR(-EBADFD);
+	return &q->skb_array;
+}
+EXPORT_SYMBOL_GPL(tap_get_skb_array);
+
 int tap_queue_resize(struct tap_dev *tap)
 {
 	struct net_device *dev = tap->dev;
diff --git a/include/linux/if_tap.h b/include/linux/if_tap.h
index 3482c3c..4837157 100644
--- a/include/linux/if_tap.h
+++ b/include/linux/if_tap.h
@@ -3,6 +3,7 @@
 
 #if IS_ENABLED(CONFIG_TAP)
 struct socket *tap_get_socket(struct file *);
+struct skb_array *tap_get_skb_array(struct file *file);
 #else
 #include <linux/err.h>
 #include <linux/errno.h>
@@ -12,6 +13,10 @@ static inline struct socket *tap_get_socket(struct file *f)
 {
 	return ERR_PTR(-EINVAL);
 }
+static inline struct skb_array *tap_get_skb_array(struct file *f)
+{
+	return ERR_PTR(-EINVAL);
+}
 #endif /* CONFIG_TAP */
 
 #include <net/sock.h>
-- 
2.7.4


* [PATCH net-next V5 7/9] tun: support receiving skb through msg_control
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (5 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 6/9] tap: " Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 8/9] tap: support receiving skb from msg_control Jason Wang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This patch makes tun_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tun.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 3cbfc5c..f8041f9c 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1510,9 +1510,8 @@ static struct sk_buff *tun_ring_recv(struct tun_file *tfile, int noblock,
 
 static ssize_t tun_do_read(struct tun_struct *tun, struct tun_file *tfile,
 			   struct iov_iter *to,
-			   int noblock)
+			   int noblock, struct sk_buff *skb)
 {
-	struct sk_buff *skb;
 	ssize_t ret;
 	int err;
 
@@ -1521,10 +1520,12 @@ static ssize_t tun_do_read(struct tun_struct *tun, struct tun_file *tfile,
 	if (!iov_iter_count(to))
 		return 0;
 
-	/* Read frames from ring */
-	skb = tun_ring_recv(tfile, noblock, &err);
-	if (!skb)
-		return err;
+	if (!skb) {
+		/* Read frames from ring */
+		skb = tun_ring_recv(tfile, noblock, &err);
+		if (!skb)
+			return err;
+	}
 
 	ret = tun_put_user(tun, tfile, skb, to);
 	if (unlikely(ret < 0))
@@ -1544,7 +1545,7 @@ static ssize_t tun_chr_read_iter(struct kiocb *iocb, struct iov_iter *to)
 
 	if (!tun)
 		return -EBADFD;
-	ret = tun_do_read(tun, tfile, to, file->f_flags & O_NONBLOCK);
+	ret = tun_do_read(tun, tfile, to, file->f_flags & O_NONBLOCK, NULL);
 	ret = min_t(ssize_t, ret, len);
 	if (ret > 0)
 		iocb->ki_pos = ret;
@@ -1646,7 +1647,8 @@ static int tun_recvmsg(struct socket *sock, struct msghdr *m, size_t total_len,
 					 SOL_PACKET, TUN_TX_TIMESTAMP);
 		goto out;
 	}
-	ret = tun_do_read(tun, tfile, &m->msg_iter, flags & MSG_DONTWAIT);
+	ret = tun_do_read(tun, tfile, &m->msg_iter, flags & MSG_DONTWAIT,
+			  m->msg_control);
 	if (ret > (ssize_t)total_len) {
 		m->msg_flags |= MSG_TRUNC;
 		ret = flags & MSG_TRUNC ? ret : total_len;
-- 
2.7.4


* [PATCH net-next V5 8/9] tap: support receiving skb from msg_control
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (6 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 7/9] tun: support receiving skb through msg_control Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17  4:14 ` [PATCH net-next V5 9/9] vhost_net: try batch dequeuing from skb array Jason Wang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

This patch makes tap_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tap.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index abdaf86..9af3239 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -824,15 +824,17 @@ static ssize_t tap_put_user(struct tap_queue *q,
 
 static ssize_t tap_do_read(struct tap_queue *q,
 			   struct iov_iter *to,
-			   int noblock)
+			   int noblock, struct sk_buff *skb)
 {
 	DEFINE_WAIT(wait);
-	struct sk_buff *skb;
 	ssize_t ret = 0;
 
 	if (!iov_iter_count(to))
 		return 0;
 
+	if (skb)
+		goto put;
+
 	while (1) {
 		if (!noblock)
 			prepare_to_wait(sk_sleep(&q->sk), &wait,
@@ -856,6 +858,7 @@ static ssize_t tap_do_read(struct tap_queue *q,
 	if (!noblock)
 		finish_wait(sk_sleep(&q->sk), &wait);
 
+put:
 	if (skb) {
 		ret = tap_put_user(q, skb, to);
 		if (unlikely(ret < 0))
@@ -872,7 +875,7 @@ static ssize_t tap_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	struct tap_queue *q = file->private_data;
 	ssize_t len = iov_iter_count(to), ret;
 
-	ret = tap_do_read(q, to, file->f_flags & O_NONBLOCK);
+	ret = tap_do_read(q, to, file->f_flags & O_NONBLOCK, NULL);
 	ret = min_t(ssize_t, ret, len);
 	if (ret > 0)
 		iocb->ki_pos = ret;
@@ -1155,7 +1158,8 @@ static int tap_recvmsg(struct socket *sock, struct msghdr *m,
 	int ret;
 	if (flags & ~(MSG_DONTWAIT|MSG_TRUNC))
 		return -EINVAL;
-	ret = tap_do_read(q, &m->msg_iter, flags & MSG_DONTWAIT);
+	ret = tap_do_read(q, &m->msg_iter, flags & MSG_DONTWAIT,
+			  m->msg_control);
 	if (ret > total_len) {
 		m->msg_flags |= MSG_TRUNC;
 		ret = flags & MSG_TRUNC ? ret : total_len;
-- 
2.7.4


* [PATCH net-next V5 9/9] vhost_net: try batch dequeuing from skb array
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (7 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 8/9] tap: support receiving skb from msg_control Jason Wang
@ 2017-05-17  4:14 ` Jason Wang
  2017-05-17 20:59 ` [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Michael S. Tsirkin
  2017-05-18 14:08 ` David Miller
  10 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2017-05-17  4:14 UTC (permalink / raw)
  To: mst, netdev, linux-kernel; +Cc: Jason Wang

We used to dequeue one skb at a time from the skb_array during
recvmsg(); this can be inefficient because of the poor cache
utilization and the spinlock taken for each packet. This patch batches
the dequeuing by calling the batch dequeuing helpers explicitly on the
exported skb array and passing the skb back through msg_control so the
underlying socket can finish the userspace copy. Batch dequeuing is
also a prerequisite for further batching improvements on the receive path.
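
Simplified, the rx fast path added below pairs a peek that refills a
local batch with a consume that hands the same skb to recvmsg() (a
sketch derived from the helpers in this patch, with error handling
omitted):

        len = vhost_net_buf_peek(nvq);  /* refills nvq->rxq from the skb_array if empty */
        if (len) {
                msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
                sock->ops->recvmsg(sock, &msg, len, MSG_DONTWAIT | MSG_TRUNC);
        }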

Tests were done with pktgen on tap and XDP1 running in the guest. The
host is an Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz.

rx batch | pps
---------+--------------------
       0 | 2.25Mpps
       1 | 2.33Mpps (+3.56%)
       4 | 2.33Mpps (+3.56%)
      16 | 2.35Mpps (+4.44%)
      64 | 2.42Mpps (+7.56%)  <- default rx batching
     128 | 2.40Mpps (+6.67%)
     256 | 2.38Mpps (+5.78%)

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c | 128 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 122 insertions(+), 6 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f61f852..e3d7ea1 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -28,6 +28,8 @@
 #include <linux/if_macvlan.h>
 #include <linux/if_tap.h>
 #include <linux/if_vlan.h>
+#include <linux/skb_array.h>
+#include <linux/skbuff.h>
 
 #include <net/sock.h>
 
@@ -85,6 +87,13 @@ struct vhost_net_ubuf_ref {
 	struct vhost_virtqueue *vq;
 };
 
+#define VHOST_RX_BATCH 64
+struct vhost_net_buf {
+	struct sk_buff **queue;
+	int tail;
+	int head;
+};
+
 struct vhost_net_virtqueue {
 	struct vhost_virtqueue vq;
 	size_t vhost_hlen;
@@ -99,6 +108,8 @@ struct vhost_net_virtqueue {
 	/* Reference counting for outstanding ubufs.
 	 * Protected by vq mutex. Writers must also take device mutex. */
 	struct vhost_net_ubuf_ref *ubufs;
+	struct skb_array *rx_array;
+	struct vhost_net_buf rxq;
 };
 
 struct vhost_net {
@@ -117,6 +128,71 @@ struct vhost_net {
 
 static unsigned vhost_net_zcopy_mask __read_mostly;
 
+static void *vhost_net_buf_get_ptr(struct vhost_net_buf *rxq)
+{
+	if (rxq->tail != rxq->head)
+		return rxq->queue[rxq->head];
+	else
+		return NULL;
+}
+
+static int vhost_net_buf_get_size(struct vhost_net_buf *rxq)
+{
+	return rxq->tail - rxq->head;
+}
+
+static int vhost_net_buf_is_empty(struct vhost_net_buf *rxq)
+{
+	return rxq->tail == rxq->head;
+}
+
+static void *vhost_net_buf_consume(struct vhost_net_buf *rxq)
+{
+	void *ret = vhost_net_buf_get_ptr(rxq);
+	++rxq->head;
+	return ret;
+}
+
+static int vhost_net_buf_produce(struct vhost_net_virtqueue *nvq)
+{
+	struct vhost_net_buf *rxq = &nvq->rxq;
+
+	rxq->head = 0;
+	rxq->tail = skb_array_consume_batched(nvq->rx_array, rxq->queue,
+					      VHOST_RX_BATCH);
+	return rxq->tail;
+}
+
+static void vhost_net_buf_unproduce(struct vhost_net_virtqueue *nvq)
+{
+	struct vhost_net_buf *rxq = &nvq->rxq;
+
+	if (nvq->rx_array && !vhost_net_buf_is_empty(rxq)) {
+		skb_array_unconsume(nvq->rx_array, rxq->queue + rxq->head,
+				    vhost_net_buf_get_size(rxq));
+		rxq->head = rxq->tail = 0;
+	}
+}
+
+static int vhost_net_buf_peek(struct vhost_net_virtqueue *nvq)
+{
+	struct vhost_net_buf *rxq = &nvq->rxq;
+
+	if (!vhost_net_buf_is_empty(rxq))
+		goto out;
+
+	if (!vhost_net_buf_produce(nvq))
+		return 0;
+
+out:
+	return __skb_array_len_with_tag(vhost_net_buf_get_ptr(rxq));
+}
+
+static void vhost_net_buf_init(struct vhost_net_buf *rxq)
+{
+	rxq->head = rxq->tail = 0;
+}
+
 static void vhost_net_enable_zcopy(int vq)
 {
 	vhost_net_zcopy_mask |= 0x1 << vq;
@@ -201,6 +277,7 @@ static void vhost_net_vq_reset(struct vhost_net *n)
 		n->vqs[i].ubufs = NULL;
 		n->vqs[i].vhost_hlen = 0;
 		n->vqs[i].sock_hlen = 0;
+		vhost_net_buf_init(&n->vqs[i].rxq);
 	}
 
 }
@@ -503,15 +580,14 @@ static void handle_tx(struct vhost_net *net)
 	mutex_unlock(&vq->mutex);
 }
 
-static int peek_head_len(struct sock *sk)
+static int peek_head_len(struct vhost_net_virtqueue *rvq, struct sock *sk)
 {
-	struct socket *sock = sk->sk_socket;
 	struct sk_buff *head;
 	int len = 0;
 	unsigned long flags;
 
-	if (sock->ops->peek_len)
-		return sock->ops->peek_len(sock);
+	if (rvq->rx_array)
+		return vhost_net_buf_peek(rvq);
 
 	spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
 	head = skb_peek(&sk->sk_receive_queue);
@@ -537,10 +613,11 @@ static int sk_has_rx_data(struct sock *sk)
 
 static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
 {
+	struct vhost_net_virtqueue *rvq = &net->vqs[VHOST_NET_VQ_RX];
 	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
 	struct vhost_virtqueue *vq = &nvq->vq;
 	unsigned long uninitialized_var(endtime);
-	int len = peek_head_len(sk);
+	int len = peek_head_len(rvq, sk);
 
 	if (!len && vq->busyloop_timeout) {
 		/* Both tx vq and rx socket were polled here */
@@ -561,7 +638,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
 			vhost_poll_queue(&vq->poll);
 		mutex_unlock(&vq->mutex);
 
-		len = peek_head_len(sk);
+		len = peek_head_len(rvq, sk);
 	}
 
 	return len;
@@ -699,6 +776,8 @@ static void handle_rx(struct vhost_net *net)
 		/* On error, stop handling until the next kick. */
 		if (unlikely(headcount < 0))
 			goto out;
+		if (nvq->rx_array)
+			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
 		/* On overrun, truncate and discard */
 		if (unlikely(headcount > UIO_MAXIOV)) {
 			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
@@ -815,6 +894,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 	struct vhost_net *n;
 	struct vhost_dev *dev;
 	struct vhost_virtqueue **vqs;
+	struct sk_buff **queue;
 	int i;
 
 	n = kvmalloc(sizeof *n, GFP_KERNEL | __GFP_REPEAT);
@@ -826,6 +906,15 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		return -ENOMEM;
 	}
 
+	queue = kmalloc_array(VHOST_RX_BATCH, sizeof(struct sk_buff *),
+			      GFP_KERNEL);
+	if (!queue) {
+		kfree(vqs);
+		kvfree(n);
+		return -ENOMEM;
+	}
+	n->vqs[VHOST_NET_VQ_RX].rxq.queue = queue;
+
 	dev = &n->dev;
 	vqs[VHOST_NET_VQ_TX] = &n->vqs[VHOST_NET_VQ_TX].vq;
 	vqs[VHOST_NET_VQ_RX] = &n->vqs[VHOST_NET_VQ_RX].vq;
@@ -838,6 +927,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		n->vqs[i].done_idx = 0;
 		n->vqs[i].vhost_hlen = 0;
 		n->vqs[i].sock_hlen = 0;
+		vhost_net_buf_init(&n->vqs[i].rxq);
 	}
 	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX);
 
@@ -853,11 +943,14 @@ static struct socket *vhost_net_stop_vq(struct vhost_net *n,
 					struct vhost_virtqueue *vq)
 {
 	struct socket *sock;
+	struct vhost_net_virtqueue *nvq =
+		container_of(vq, struct vhost_net_virtqueue, vq);
 
 	mutex_lock(&vq->mutex);
 	sock = vq->private_data;
 	vhost_net_disable_vq(n, vq);
 	vq->private_data = NULL;
+	vhost_net_buf_unproduce(nvq);
 	mutex_unlock(&vq->mutex);
 	return sock;
 }
@@ -912,6 +1005,7 @@ static int vhost_net_release(struct inode *inode, struct file *f)
 	/* We do an extra flush before freeing memory,
 	 * since jobs can re-queue themselves. */
 	vhost_net_flush(n);
+	kfree(n->vqs[VHOST_NET_VQ_RX].rxq.queue);
 	kfree(n->dev.vqs);
 	kvfree(n);
 	return 0;
@@ -950,6 +1044,25 @@ static struct socket *get_raw_socket(int fd)
 	return ERR_PTR(r);
 }
 
+static struct skb_array *get_tap_skb_array(int fd)
+{
+	struct skb_array *array;
+	struct file *file = fget(fd);
+
+	if (!file)
+		return NULL;
+	array = tun_get_skb_array(file);
+	if (!IS_ERR(array))
+		goto out;
+	array = tap_get_skb_array(file);
+	if (!IS_ERR(array))
+		goto out;
+	array = NULL;
+out:
+	fput(file);
+	return array;
+}
+
 static struct socket *get_tap_socket(int fd)
 {
 	struct file *file = fget(fd);
@@ -1026,6 +1139,9 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 
 		vhost_net_disable_vq(n, vq);
 		vq->private_data = sock;
+		vhost_net_buf_unproduce(nvq);
+		if (index == VHOST_NET_VQ_RX)
+			nvq->rx_array = get_tap_skb_array(fd);
 		r = vhost_vq_init_access(vq);
 		if (r)
 			goto err_used;
-- 
2.7.4


* Re: [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (8 preceding siblings ...)
  2017-05-17  4:14 ` [PATCH net-next V5 9/9] vhost_net: try batch dequeuing from skb array Jason Wang
@ 2017-05-17 20:59 ` Michael S. Tsirkin
  2017-05-19  6:27   ` Jason Wang
  2017-05-18 14:08 ` David Miller
  10 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2017-05-17 20:59 UTC (permalink / raw)
  To: Jason Wang; +Cc: netdev, linux-kernel

On Wed, May 17, 2017 at 12:14:36PM +0800, Jason Wang wrote:
> This series implements rx batching for vhost-net. This is done by
> batching the dequeuing from the skb_array exported by the underlying
> socket and passing the skb back through msg_control to finish the
> userspace copy. This is also a prerequisite for further batching
> improvements on the rx path.
> 
> Tests show at most a 7.56% improvement in rx pps on top of batch
> zeroing, and no obvious changes in the TCP_STREAM/TCP_RR results.
> 
> Please review.
> 
> Thanks

A surprisingly large gain for such a simple change.  It would be nice
to understand better why this helps - in particular, does the optimal
batch size change if the ring is bigger or smaller? But let's merge it
meanwhile.

Series:

Acked-by: Michael S. Tsirkin <mst@redhat.com>



> Changes from V4:
> - drop batch zeroing patch
> - renew the performance numbers
> - move skb pointer array out of vhost_net structure
> 
> Changes from V3:
> - add batch zeroing patch to fix the build warnings
> 
> Changes from V2:
> - rebase to net-next HEAD
> - use unconsume helpers to put skb back on releasing
> - introduce and use vhost_net internal buffer helpers
> - renew performance numbers on top of batch zeroing
> 
> Changes from V1:
> - switch to use for() in __ptr_ring_consume_batched()
> - rename peek_head_len_batched() to fetch_skbs()
> - use skb_array_consume_batched() instead of
>   skb_array_consume_batched_bh() since no consumer runs in bh
> - drop the lockless peeking patch since skb_array could be resized, so
>   it's not safe to call the lockless one
> 
> Jason Wang (8):
>   skb_array: introduce skb_array_unconsume
>   ptr_ring: introduce batch dequeuing
>   skb_array: introduce batch dequeuing
>   tun: export skb_array
>   tap: export skb_array
>   tun: support receiving skb through msg_control
>   tap: support receiving skb from msg_control
>   vhost_net: try batch dequeuing from skb array
> 
> Michael S. Tsirkin (1):
>   ptr_ring: add ptr_ring_unconsume
> 
>  drivers/net/tap.c         |  25 +++++++--
>  drivers/net/tun.c         |  31 ++++++++---
>  drivers/vhost/net.c       | 128 +++++++++++++++++++++++++++++++++++++++++++---
>  include/linux/if_tap.h    |   5 ++
>  include/linux/if_tun.h    |   5 ++
>  include/linux/ptr_ring.h  | 120 +++++++++++++++++++++++++++++++++++++++++++
>  include/linux/skb_array.h |  31 +++++++++++
>  7 files changed, 327 insertions(+), 18 deletions(-)
> 
> -- 
> 2.7.4


* Re: [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
  2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
                   ` (9 preceding siblings ...)
  2017-05-17 20:59 ` [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Michael S. Tsirkin
@ 2017-05-18 14:08 ` David Miller
  10 siblings, 0 replies; 14+ messages in thread
From: David Miller @ 2017-05-18 14:08 UTC (permalink / raw)
  To: jasowang; +Cc: mst, netdev, linux-kernel

From: Jason Wang <jasowang@redhat.com>
Date: Wed, 17 May 2017 12:14:36 +0800

> This series implements rx batching for vhost-net. This is done by
> batching the dequeuing from the skb_array exported by the underlying
> socket and passing the skb back through msg_control to finish the
> userspace copy. This is also a prerequisite for further batching
> improvements on the rx path.
> 
> Tests show at most a 7.56% improvement in rx pps on top of batch
> zeroing, and no obvious changes in the TCP_STREAM/TCP_RR results.

Series applied, thanks Jason.


* Re: [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
  2017-05-17 20:59 ` [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Michael S. Tsirkin
@ 2017-05-19  6:27   ` Jason Wang
  2017-05-19 16:36     ` Michael S. Tsirkin
  0 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2017-05-19  6:27 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: netdev, linux-kernel



On 2017/05/18 04:59, Michael S. Tsirkin wrote:
> On Wed, May 17, 2017 at 12:14:36PM +0800, Jason Wang wrote:
>> This series implements rx batching for vhost-net. This is done by
>> batching the dequeuing from the skb_array exported by the underlying
>> socket and passing the skb back through msg_control to finish the
>> userspace copy. This is also a prerequisite for further batching
>> improvements on the rx path.
>>
>> Tests show at most a 7.56% improvement in rx pps on top of batch
>> zeroing, and no obvious changes in the TCP_STREAM/TCP_RR results.
>>
>> Please review.
>>
>> Thanks
> A surprisingly large gain for such a simple change.  It would be nice
> to understand better why this helps - in particular, does the optimal
> batch size change if the ring is bigger or smaller?

Will test, just want to confirm: you mean the virtio ring, not tx_queue_len, here?

Thanks

> But let's merge it
> meanwhile.
>
> Series:
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
>
>
>> Changes from V4:
>> - drop batch zeroing patch
>> - renew the performance numbers
>> - move skb pointer array out of vhost_net structure
>>
>> Changes from V3:
>> - add batch zeroing patch to fix the build warnings
>>
>> Changes from V2:
>> - rebase to net-next HEAD
>> - use unconsume helpers to put skb back on releasing
>> - introduce and use vhost_net internal buffer helpers
>> - renew performance numbers on top of batch zeroing
>>
>> Changes from V1:
>> - switch to use for() in __ptr_ring_consume_batched()
>> - rename peek_head_len_batched() to fetch_skbs()
>> - use skb_array_consume_batched() instead of
>>    skb_array_consume_batched_bh() since no consumer runs in bh
>> - drop the lockless peeking patch since skb_array could be resized, so
>>    it's not safe to call the lockless one
>>
>> Jason Wang (8):
>>    skb_array: introduce skb_array_unconsume
>>    ptr_ring: introduce batch dequeuing
>>    skb_array: introduce batch dequeuing
>>    tun: export skb_array
>>    tap: export skb_array
>>    tun: support receiving skb through msg_control
>>    tap: support receiving skb from msg_control
>>    vhost_net: try batch dequeuing from skb array
>>
>> Michael S. Tsirkin (1):
>>    ptr_ring: add ptr_ring_unconsume
>>
>>   drivers/net/tap.c         |  25 +++++++--
>>   drivers/net/tun.c         |  31 ++++++++---
>>   drivers/vhost/net.c       | 128 +++++++++++++++++++++++++++++++++++++++++++---
>>   include/linux/if_tap.h    |   5 ++
>>   include/linux/if_tun.h    |   5 ++
>>   include/linux/ptr_ring.h  | 120 +++++++++++++++++++++++++++++++++++++++++++
>>   include/linux/skb_array.h |  31 +++++++++++
>>   7 files changed, 327 insertions(+), 18 deletions(-)
>>
>> -- 
>> 2.7.4


* Re: [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
  2017-05-19  6:27   ` Jason Wang
@ 2017-05-19 16:36     ` Michael S. Tsirkin
  0 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2017-05-19 16:36 UTC (permalink / raw)
  To: Jason Wang; +Cc: netdev, linux-kernel

On Fri, May 19, 2017 at 02:27:16PM +0800, Jason Wang wrote:
> 
> 
> On 2017/05/18 04:59, Michael S. Tsirkin wrote:
> > On Wed, May 17, 2017 at 12:14:36PM +0800, Jason Wang wrote:
> > > This series implements rx batching for vhost-net. This is done by
> > > batching the dequeuing from the skb_array exported by the underlying
> > > socket and passing the skb back through msg_control to finish the
> > > userspace copy. This is also a prerequisite for further batching
> > > improvements on the rx path.
> > > 
> > > Tests show at most a 7.56% improvement in rx pps on top of batch
> > > zeroing, and no obvious changes in the TCP_STREAM/TCP_RR results.
> > > 
> > > Please review.
> > > 
> > > Thanks
> > A surprisingly large gain for such a simple change.  It would be nice
> > to understand better why this helps - in particular, does the optimal
> > batch size change if the ring is bigger or smaller?
> 
> Will test, just want to confirm: you mean the virtio ring, not tx_queue_len, here?
> 
> Thanks

Exactly.
Thanks,
MST


end of thread (newest message: 2017-05-19 16:36 UTC)

Thread overview: 14+ messages
2017-05-17  4:14 [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 1/9] ptr_ring: add ptr_ring_unconsume Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 2/9] skb_array: introduce skb_array_unconsume Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 3/9] ptr_ring: introduce batch dequeuing Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 4/9] skb_array: " Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 5/9] tun: export skb_array Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 6/9] tap: " Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 7/9] tun: support receiving skb through msg_control Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 8/9] tap: support receiving skb from msg_control Jason Wang
2017-05-17  4:14 ` [PATCH net-next V5 9/9] vhost_net: try batch dequeuing from skb array Jason Wang
2017-05-17 20:59 ` [PATCH net-next V5 0/9] vhost_net rx batch dequeuing Michael S. Tsirkin
2017-05-19  6:27   ` Jason Wang
2017-05-19 16:36     ` Michael S. Tsirkin
2017-05-18 14:08 ` David Miller
