* [PATCH 0/5] XDP for virtio_net
@ 2016-11-18 18:59 John Fastabend
  2016-11-18 18:59 ` [PATCH 1/5] net: virtio dynamically disable/enable LRO John Fastabend
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 18:59 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

This implements XDP support for virtio_net in the mergeable buffers
and big_packet modes. I tested this with vhost_net running on qemu and
did not see any issues.

There are some restrictions for XDP to be enabled (see patch 3 for
more details):

  1. LRO must be off
  2. MTU must be less than PAGE_SIZE
  3. queues must be available to dedicate to XDP
  4. num_bufs received in mergeable buffers must be 1
  5. big_packet mode must have all data on single page
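
For anyone who wants to try this, a minimal XDP program that just
passes every frame up the stack looks roughly like the sketch below
(illustrative only, not part of the series; the section name is just
a loader convention):

  #include <linux/bpf.h>

  /* Minimal sketch: return XDP_PASS for every frame so traffic flows
   * normally while the virtio_net XDP path is exercised.
   */
  __attribute__((section("prog"), used))
  int xdp_pass_all(struct xdp_md *ctx)
  {
  	return XDP_PASS;
  }

  char _license[] __attribute__((section("license"), used)) = "GPL";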

Please review; any comments/feedback welcome as always.

Thanks,
John
---

John Fastabend (4):
      net: virtio dynamically disable/enable LRO
      net: xdp: add invalid buffer warning
      virtio_net: add dedicated XDP transmit queues
      virtio_net: add XDP_TX support

Shrijeet Mukherjee (1):
      virtio_net: Add XDP support


 drivers/net/virtio_net.c |  264 +++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/filter.h   |    1 +
 net/core/filter.c        |    6 +
 3 files changed, 267 insertions(+), 4 deletions(-)


* [PATCH 1/5] net: virtio dynamically disable/enable LRO
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
@ 2016-11-18 18:59 ` John Fastabend
  2016-11-18 18:59 ` [PATCH 2/5] net: xdp: add invalid buffer warning John Fastabend
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 18:59 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

This adds support for dynamically setting the LRO feature flag. The
message to control guest features in the backend uses the
CTRL_GUEST_OFFLOADS msg type.
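
With this in place LRO can be toggled at runtime with standard
ethtool (interface name illustrative):

  # ethtool -K eth0 lro off	# required before attaching an XDP program
  # ethtool -K eth0 lro on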

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---
 drivers/net/virtio_net.c |   43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2cafd12..0758cae 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1419,6 +1419,41 @@ static void virtnet_init_settings(struct net_device *dev)
 	.set_settings = virtnet_set_settings,
 };
 
+static int virtnet_set_features(struct net_device *netdev,
+				netdev_features_t features)
+{
+	struct virtnet_info *vi = netdev_priv(netdev);
+	struct virtio_device *vdev = vi->vdev;
+	struct scatterlist sg;
+	u64 offloads = 0;
+
+	if (features & NETIF_F_LRO)
+		offloads |= (1 << VIRTIO_NET_F_GUEST_TSO4) |
+			    (1 << VIRTIO_NET_F_GUEST_TSO6);
+
+	if (features & NETIF_F_RXCSUM)
+		offloads |= (1 << VIRTIO_NET_F_GUEST_CSUM);
+
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
+		sg_init_one(&sg, &offloads, sizeof(uint64_t));
+		if (!virtnet_send_command(vi,
+					  VIRTIO_NET_CTRL_GUEST_OFFLOADS,
+					  VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET,
+					  &sg)) {
+			dev_warn(&netdev->dev,
+				 "Failed to set guest offloads by virtnet command.\n");
+			return -EINVAL;
+		}
+	} else if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) {
+		dev_warn(&netdev->dev,
+			 "No support for setting offloads pre version_1.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static const struct net_device_ops virtnet_netdev = {
 	.ndo_open            = virtnet_open,
 	.ndo_stop   	     = virtnet_close,
@@ -1435,6 +1470,7 @@ static void virtnet_init_settings(struct net_device *dev)
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	.ndo_busy_poll		= virtnet_busy_poll,
 #endif
+	.ndo_set_features	= virtnet_set_features,
 };
 
 static void virtnet_config_changed_work(struct work_struct *work)
@@ -1810,6 +1846,12 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
 		dev->features |= NETIF_F_RXCSUM;
 
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) &&
+	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) {
+		dev->features |= NETIF_F_LRO;
+		dev->hw_features |= NETIF_F_LRO;
+	}
+
 	dev->vlan_features = dev->features;
 
 	/* MTU range: 68 - 65535 */
@@ -2049,6 +2091,7 @@ static int virtnet_restore(struct virtio_device *vdev)
 	VIRTIO_NET_F_CTRL_MAC_ADDR,
 	VIRTIO_F_ANY_LAYOUT,
 	VIRTIO_NET_F_MTU,
+	VIRTIO_NET_F_CTRL_GUEST_OFFLOADS,
 };
 
 static struct virtio_driver virtio_net_driver = {


* [PATCH 2/5] net: xdp: add invalid buffer warning
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
  2016-11-18 18:59 ` [PATCH 1/5] net: virtio dynamically disable/enable LRO John Fastabend
@ 2016-11-18 18:59 ` John Fastabend
  2016-11-18 19:00 ` [PATCH 3/5] virtio_net: Add XDP support John Fastabend
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 18:59 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

This adds a warning for drivers to use when encountering an invalid
buffer for XDP. In normal cases this should not happen, but a standard
warning is useful for catching unexpected behavior from the emulation
layer in virtual/qemu setups that I may not have anticipated.
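
The intended call site in a driver looks roughly like the sketch
below (see patch 3 for the real user; names follow this series):

	/* sketch: warn once and drop a frame XDP cannot handle */
	if (unlikely(num_buf > 1)) {
		bpf_warn_invalid_xdp_buffer();
		goto err_skb;
	}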

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---
 include/linux/filter.h |    1 +
 net/core/filter.c      |    6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1f09c52..0c79004 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -595,6 +595,7 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *filter,
 struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
 				       const struct bpf_insn *patch, u32 len);
 void bpf_warn_invalid_xdp_action(u32 act);
+void bpf_warn_invalid_xdp_buffer(void);
 
 #ifdef CONFIG_BPF_JIT
 extern int bpf_jit_enable;
diff --git a/net/core/filter.c b/net/core/filter.c
index cd9e2ba..b8fb57c 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2722,6 +2722,12 @@ void bpf_warn_invalid_xdp_action(u32 act)
 }
 EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
 
+void bpf_warn_invalid_xdp_buffer(void)
+{
+	WARN_ONCE(1, "Illegal XDP buffer encountered, expect packet loss\n");
+}
+EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_buffer);
+
 static u32 sk_filter_convert_ctx_access(enum bpf_access_type type, int dst_reg,
 					int src_reg, int ctx_off,
 					struct bpf_insn *insn_buf,


* [PATCH 3/5] virtio_net: Add XDP support
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
  2016-11-18 18:59 ` [PATCH 1/5] net: virtio dynamically disable/enable LRO John Fastabend
  2016-11-18 18:59 ` [PATCH 2/5] net: xdp: add invalid buffer warning John Fastabend
@ 2016-11-18 19:00 ` John Fastabend
  2016-11-18 23:21   ` Eric Dumazet
  2016-11-18 23:23   ` Eric Dumazet
  2016-11-18 19:00 ` [PATCH 4/5] virtio_net: add dedicated XDP transmit queues John Fastabend
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 19:00 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

From: Shrijeet Mukherjee <shrijeet@gmail.com>

This adds XDP support to virtio_net. Some requirements must be met
for XDP to be enabled, depending on the mode. First, XDP is only
supported with LRO disabled so that data is not pushed across
multiple buffers. Second, the MTU must be less than a page size to
avoid having to handle XDP across multiple pages.

If mergeable receive is enabled, this first series only supports the
case where header and data are in the same buffer, which we can check
when a packet is received by looking at num_buf. If num_buf is
greater than 1 and an XDP program is loaded, the packet is dropped
and a warning is thrown. When any_header_sg is set this does not
happen and both header and data are placed in a single buffer as
expected, so we check for any_header_sg when XDP programs are loaded.
Note I have only tested this with the Linux vhost backend.

If big packets mode is enabled and MTU/LRO conditions above are
met then XDP is allowed.

A follow-on patch can be generated to solve the mergeable receive
case with num_bufs equal to 2. Buffers greater than two may not be
handled as easily.
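
For testing, a program can be attached and detached with a recent
iproute2 (assuming a build with XDP support; device and object file
names are illustrative):

  # ip link set dev eth0 xdp obj xdp_pass.o
  # ip link set dev eth0 xdp off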

Suggested-by: Shrijeet Mukherjee <shrijeet@gmail.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---
 drivers/net/virtio_net.c |  144 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 140 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0758cae..16c257d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -22,6 +22,7 @@
 #include <linux/module.h>
 #include <linux/virtio.h>
 #include <linux/virtio_net.h>
+#include <linux/bpf.h>
 #include <linux/scatterlist.h>
 #include <linux/if_vlan.h>
 #include <linux/slab.h>
@@ -81,6 +82,8 @@ struct receive_queue {
 
 	struct napi_struct napi;
 
+	struct bpf_prog *xdp_prog;
+
 	/* Chain pages by the private ptr. */
 	struct page *pages;
 
@@ -324,6 +327,38 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	return skb;
 }
 
+static u32 do_xdp_prog(struct virtnet_info *vi,
+		       struct bpf_prog *xdp_prog,
+		       struct page *page, int offset, int len)
+{
+	int hdr_padded_len;
+	struct xdp_buff xdp;
+	u32 act;
+	u8 *buf;
+
+	buf = page_address(page) + offset;
+
+	if (vi->mergeable_rx_bufs)
+		hdr_padded_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	else
+		hdr_padded_len = sizeof(struct padded_vnet_hdr);
+
+	xdp.data = buf + hdr_padded_len;
+	xdp.data_end = xdp.data + (len - vi->hdr_len);
+
+	act = bpf_prog_run_xdp(xdp_prog, &xdp);
+	switch (act) {
+	case XDP_PASS:
+		return XDP_PASS;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+	case XDP_TX:
+	case XDP_ABORTED:
+	case XDP_DROP:
+		return XDP_DROP;
+	}
+}
+
 static struct sk_buff *receive_small(struct virtnet_info *vi, void *buf, unsigned int len)
 {
 	struct sk_buff * skb = buf;
@@ -340,9 +375,19 @@ static struct sk_buff *receive_big(struct net_device *dev,
 				   void *buf,
 				   unsigned int len)
 {
+	struct bpf_prog *xdp_prog;
 	struct page *page = buf;
-	struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
+	struct sk_buff *skb;
+
+	xdp_prog = rcu_dereference(rq->xdp_prog);
+	if (xdp_prog) {
+		u32 act = do_xdp_prog(vi, xdp_prog, page, 0, len);
+
+		if (act == XDP_DROP)
+			goto err;
+	}
 
+	skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
 	if (unlikely(!skb))
 		goto err;
 
@@ -366,10 +411,25 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	struct page *page = virt_to_head_page(buf);
 	int offset = buf - page_address(page);
 	unsigned int truesize = max(len, mergeable_ctx_to_buf_truesize(ctx));
+	struct sk_buff *head_skb, *curr_skb;
+	struct bpf_prog *xdp_prog;
 
-	struct sk_buff *head_skb = page_to_skb(vi, rq, page, offset, len,
-					       truesize);
-	struct sk_buff *curr_skb = head_skb;
+	xdp_prog = rcu_dereference(rq->xdp_prog);
+	if (xdp_prog) {
+		u32 act;
+
+		if (num_buf > 1) {
+			bpf_warn_invalid_xdp_buffer();
+			goto err_skb;
+		}
+
+		act = do_xdp_prog(vi, xdp_prog, page, offset, len);
+		if (act == XDP_DROP)
+			goto err_skb;
+	}
+
+	head_skb = page_to_skb(vi, rq, page, offset, len, truesize);
+	curr_skb = head_skb;
 
 	if (unlikely(!curr_skb))
 		goto err_skb;
@@ -1328,6 +1388,13 @@ static int virtnet_set_channels(struct net_device *dev,
 	if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0)
 		return -EINVAL;
 
+	/* For now we don't support modifying channels while XDP is loaded
+	 * also when XDP is loaded all RX queues have XDP programs so we only
+	 * need to check a single RX queue.
+	 */
+	if (vi->rq[0].xdp_prog)
+		return -EINVAL;
+
 	get_online_cpus();
 	err = virtnet_set_queues(vi, queue_pairs);
 	if (!err) {
@@ -1454,6 +1521,68 @@ static int virtnet_set_features(struct net_device *netdev,
 	return 0;
 }
 
+static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct bpf_prog *old_prog;
+	int i;
+
+	if ((dev->features & NETIF_F_LRO) && prog) {
+		netdev_warn(dev, "can't set XDP while LRO is on, disable LRO first\n");
+		return -EINVAL;
+	}
+
+	if (vi->mergeable_rx_bufs && !vi->any_header_sg) {
+		netdev_warn(dev, "XDP expects header/data in single page\n");
+		return -EINVAL;
+	}
+
+	if (dev->mtu > PAGE_SIZE) {
+		netdev_warn(dev, "XDP requires MTU less than %lu\n", PAGE_SIZE);
+		return -EINVAL;
+	}
+
+	if (prog) {
+		prog = bpf_prog_add(prog, vi->max_queue_pairs - 1);
+		if (IS_ERR(prog))
+			return PTR_ERR(prog);
+	}
+
+	for (i = 0; i < vi->max_queue_pairs; i++) {
+		old_prog = rcu_dereference(vi->rq[i].xdp_prog);
+		rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
+		if (old_prog)
+			bpf_prog_put(old_prog);
+	}
+
+	return 0;
+}
+
+static bool virtnet_xdp_query(struct net_device *dev)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	int i;
+
+	for (i = 0; i < vi->max_queue_pairs; i++) {
+		if (vi->rq[i].xdp_prog)
+			return true;
+	}
+	return false;
+}
+
+static int virtnet_xdp(struct net_device *dev, struct netdev_xdp *xdp)
+{
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return virtnet_xdp_set(dev, xdp->prog);
+	case XDP_QUERY_PROG:
+		xdp->prog_attached = virtnet_xdp_query(dev);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
 static const struct net_device_ops virtnet_netdev = {
 	.ndo_open            = virtnet_open,
 	.ndo_stop   	     = virtnet_close,
@@ -1471,6 +1600,7 @@ static int virtnet_set_features(struct net_device *netdev,
 	.ndo_busy_poll		= virtnet_busy_poll,
 #endif
 	.ndo_set_features	= virtnet_set_features,
+	.ndo_xdp		= virtnet_xdp,
 };
 
 static void virtnet_config_changed_work(struct work_struct *work)
@@ -1527,11 +1657,17 @@ static void virtnet_free_queues(struct virtnet_info *vi)
 
 static void free_receive_bufs(struct virtnet_info *vi)
 {
+	struct bpf_prog *old_prog;
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		while (vi->rq[i].pages)
 			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
+
+		old_prog = rcu_dereference(vi->rq[i].xdp_prog);
+		RCU_INIT_POINTER(vi->rq[i].xdp_prog, NULL);
+		if (old_prog)
+			bpf_prog_put(old_prog);
 	}
 }
 


* [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
                   ` (2 preceding siblings ...)
  2016-11-18 19:00 ` [PATCH 3/5] virtio_net: Add XDP support John Fastabend
@ 2016-11-18 19:00 ` John Fastabend
  2016-11-18 21:09   ` Jakub Kicinski
  2016-11-18 19:01 ` [PATCH 5/5] virtio_net: add XDP_TX support John Fastabend
  2016-11-18 19:06 ` [PATCH 0/5] XDP for virtio_net John Fastabend
  5 siblings, 1 reply; 18+ messages in thread
From: John Fastabend @ 2016-11-18 19:00 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

XDP requires using isolated transmit queues to avoid interference
with the normal networking stack (BQL, NETDEV_TX_BUSY, etc). This
patch adds an XDP queue per cpu when an XDP program is loaded and
does not expose the queues to the OS via the normal API call to
netif_set_real_num_tx_queues(). This way the stack will never push
an skb to these queues.

However, the virtio/vhost/qemu implementation only allows creating
TX/RX queue pairs at this time, so creating only TX queues was not
possible. Because the associated RX queues are created anyway, I went
ahead and exposed them to the stack and let the backend use them.
This leaves more RX queues visible to the network stack than TX
queues, which is worth mentioning but does not cause any issues as
far as I can tell.
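
A concrete example of the queue accounting (values illustrative):

	/* 4 online cpus, curr_queue_pairs = 4, max_queue_pairs >= 8 */
	curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;	/* 4 */
	xdp_qp = num_online_cpus();				/* 4 */
	/* the backend is asked for 8 queue pairs, but
	 * real_num_tx_queues stays at 4, so sq[4]..sq[7] are
	 * reachable only from the XDP_TX path.
	 */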

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---
 drivers/net/virtio_net.c |   32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 16c257d..631ee07 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -114,6 +114,9 @@ struct virtnet_info {
 	/* # of queue pairs currently used by the driver */
 	u16 curr_queue_pairs;
 
+	/* # of XDP queue pairs currently used by the driver */
+	u16 xdp_queue_pairs;
+
 	/* I like... big packets and I cannot lie! */
 	bool big_packets;
 
@@ -1525,7 +1528,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	struct bpf_prog *old_prog;
-	int i;
+	u16 xdp_qp = 0, curr_qp;
+	int err, i;
 
 	if ((dev->features & NETIF_F_LRO) && prog) {
 		netdev_warn(dev, "can't set XDP while LRO is on, disable LRO first\n");
@@ -1542,12 +1546,34 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
 		return -EINVAL;
 	}
 
+	curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
+	if (prog)
+		xdp_qp = num_online_cpus();
+
+	/* XDP requires extra queues for XDP_TX */
+	if (curr_qp + xdp_qp > vi->max_queue_pairs) {
+		netdev_warn(dev, "request %i queues but max is %i\n",
+			    curr_qp + xdp_qp, vi->max_queue_pairs);
+		return -ENOMEM;
+	}
+
+	err = virtnet_set_queues(vi, curr_qp + xdp_qp);
+	if (err) {
+		dev_warn(&dev->dev, "XDP Device queue allocation failure.\n");
+		return err;
+	}
+
 	if (prog) {
-		prog = bpf_prog_add(prog, vi->max_queue_pairs - 1);
-		if (IS_ERR(prog))
+		prog = bpf_prog_add(prog, vi->max_queue_pairs);
+		if (IS_ERR(prog)) {
+			virtnet_set_queues(vi, curr_qp);
 			return PTR_ERR(prog);
+		}
 	}
 
+	vi->xdp_queue_pairs = xdp_qp;
+	netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
+
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		old_prog = rcu_dereference(vi->rq[i].xdp_prog);
 		rcu_assign_pointer(vi->rq[i].xdp_prog, prog);


* [PATCH 5/5] virtio_net: add XDP_TX support
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
                   ` (3 preceding siblings ...)
  2016-11-18 19:00 ` [PATCH 4/5] virtio_net: add dedicated XDP transmit queues John Fastabend
@ 2016-11-18 19:01 ` John Fastabend
  2016-11-18 19:06 ` [PATCH 0/5] XDP for virtio_net John Fastabend
  5 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 19:01 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem
  Cc: john.r.fastabend, netdev, bblanco, john.fastabend, brouer

This adds support for the XDP_TX action to virtio_net. When an XDP
program runs and returns the XDP_TX action, the virtio_net XDP
implementation will transmit the packet on the TX queue corresponding
to the CPU the packet was processed on.

Before the packet is sent, the header is zeroed. XDP programs are
also expected to handle checksums correctly, so no checksum offload
support is provided.
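
The queue selection then reduces to simple arithmetic (sketch, values
illustrative):

	/* with curr_queue_pairs = 8 and xdp_queue_pairs = 4, CPU 2
	 * transmits on sq[8 - 4 + 2] = sq[6], one of the queues
	 * hidden from the stack by the previous patch.
	 */
	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();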

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---
 drivers/net/virtio_net.c |   57 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 631ee07..4b22938 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -330,12 +330,40 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	return skb;
 }
 
+static void virtnet_xdp_xmit(struct virtnet_info *vi,
+			     unsigned int qnum, struct xdp_buff *xdp)
+{
+	struct send_queue *sq = &vi->sq[qnum];
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+	unsigned int num_sg, len;
+	void *xdp_sent;
+
+	/* Free up any pending old buffers before queueing new ones. */
+	while ((xdp_sent = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		struct page *page = virt_to_head_page(xdp_sent);
+
+		put_page(page);
+	}
+
+	/* Zero header and leave csum up to XDP layers */
+	hdr = xdp->data;
+	memset(hdr, 0, vi->hdr_len);
+	hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE;
+	hdr->hdr.flags = VIRTIO_NET_HDR_F_DATA_VALID;
+
+	num_sg = 1;
+	sg_init_one(sq->sg, xdp->data, xdp->data_end - xdp->data);
+	virtqueue_add_outbuf(sq->vq, sq->sg, num_sg, xdp->data, GFP_ATOMIC);
+	virtqueue_kick(sq->vq);
+}
+
 static u32 do_xdp_prog(struct virtnet_info *vi,
 		       struct bpf_prog *xdp_prog,
 		       struct page *page, int offset, int len)
 {
 	int hdr_padded_len;
 	struct xdp_buff xdp;
+	unsigned int qp;
 	u32 act;
 	u8 *buf;
 
@@ -353,9 +381,15 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
 	switch (act) {
 	case XDP_PASS:
 		return XDP_PASS;
+	case XDP_TX:
+		qp = vi->curr_queue_pairs -
+			vi->xdp_queue_pairs +
+			smp_processor_id();
+		xdp.data = buf + (vi->mergeable_rx_bufs ? 0 : 4);
+		virtnet_xdp_xmit(vi, qp, &xdp);
+		return XDP_TX;
 	default:
 		bpf_warn_invalid_xdp_action(act);
-	case XDP_TX:
 	case XDP_ABORTED:
 	case XDP_DROP:
 		return XDP_DROP;
@@ -386,8 +420,15 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	if (xdp_prog) {
 		u32 act = do_xdp_prog(vi, xdp_prog, page, 0, len);
 
-		if (act == XDP_DROP)
+		switch (act) {
+		case XDP_PASS:
+			break;
+		case XDP_TX:
+			goto xdp_xmit;
+		case XDP_DROP:
+		default:
 			goto err;
+		}
 	}
 
 	skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
@@ -399,6 +440,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
 err:
 	dev->stats.rx_dropped++;
 	give_pages(rq, page);
+xdp_xmit:
 	return NULL;
 }
 
@@ -417,6 +459,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	struct sk_buff *head_skb, *curr_skb;
 	struct bpf_prog *xdp_prog;
 
+	head_skb = NULL;
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		u32 act;
@@ -427,8 +470,15 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		}
 
 		act = do_xdp_prog(vi, xdp_prog, page, offset, len);
-		if (act == XDP_DROP)
+		switch (act) {
+		case XDP_PASS:
+			break;
+		case XDP_TX:
+			goto xdp_xmit;
+		case XDP_DROP:
+		default:
 			goto err_skb;
+		}
 	}
 
 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize);
@@ -502,6 +552,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 err_buf:
 	dev->stats.rx_dropped++;
 	dev_kfree_skb(head_skb);
+xdp_xmit:
 	return NULL;
 }
 


* Re: [PATCH 0/5] XDP for virtio_net
  2016-11-18 18:59 [PATCH 0/5] XDP for virtio_net John Fastabend
                   ` (4 preceding siblings ...)
  2016-11-18 19:01 ` [PATCH 5/5] virtio_net: add XDP_TX support John Fastabend
@ 2016-11-18 19:06 ` John Fastabend
  5 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-18 19:06 UTC (permalink / raw)
  To: tgraf, shm, alexei.starovoitov, daniel, davem, Michael S. Tsirkin
  Cc: john.r.fastabend, netdev, bblanco, brouer

On 16-11-18 10:59 AM, John Fastabend wrote:
> This implements XDP support for virtio_net in the mergeable buffers
> and big_packet modes. I tested this with vhost_net running on qemu and
> did not see any issues.
> 
> There are some restrictions for XDP to be enabled (see patch 3 for
> more details):
> 
>   1. LRO must be off
>   2. MTU must be less than PAGE_SIZE
>   3. queues must be available to dedicate to XDP
>   4. num_bufs received in mergeable buffers must be 1
>   5. big_packet mode must have all data on single page
> 
> Please review; any comments/feedback welcome as always.
> 
> Thanks,
> John
> ---
> 

Hi Dave,

Should be obvious, but this is for net-next; I dropped the tag from my
git-send command.

Also I missed probably the most important person on the CC/TO list.

+Michael Tsirkin.

Thanks,
John


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-18 19:00 ` [PATCH 4/5] virtio_net: add dedicated XDP transmit queues John Fastabend
@ 2016-11-18 21:09   ` Jakub Kicinski
  2016-11-19  2:10     ` Jakub Kicinski
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2016-11-18 21:09 UTC (permalink / raw)
  To: John Fastabend
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

Looks very cool! :)

On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:
> @@ -1542,12 +1546,34 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
>  		return -EINVAL;
>  	}
>  
> +	curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> +	if (prog)
> +		xdp_qp = num_online_cpus();

Is num_online_cpus() correct here?


* Re: [PATCH 3/5] virtio_net: Add XDP support
  2016-11-18 19:00 ` [PATCH 3/5] virtio_net: Add XDP support John Fastabend
@ 2016-11-18 23:21   ` Eric Dumazet
  2016-11-19  2:16     ` John Fastabend
  2016-11-18 23:23   ` Eric Dumazet
  1 sibling, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2016-11-18 23:21 UTC (permalink / raw)
  To: John Fastabend
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On Fri, 2016-11-18 at 11:00 -0800, John Fastabend wrote:


>  static void free_receive_bufs(struct virtnet_info *vi)
>  {
> +	struct bpf_prog *old_prog;
>  	int i;
>  
>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>  		while (vi->rq[i].pages)
>  			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
> +
> +		old_prog = rcu_dereference(vi->rq[i].xdp_prog);

Seems wrong to me.

Are you sure lockdep (with CONFIG_PROVE_RCU=y) was happy with this ?

> +		RCU_INIT_POINTER(vi->rq[i].xdp_prog, NULL);
> +		if (old_prog)
> +			bpf_prog_put(old_prog);
>  	}
>  }
>  
> 


* Re: [PATCH 3/5] virtio_net: Add XDP support
  2016-11-18 19:00 ` [PATCH 3/5] virtio_net: Add XDP support John Fastabend
  2016-11-18 23:21   ` Eric Dumazet
@ 2016-11-18 23:23   ` Eric Dumazet
  2016-11-19  1:02     ` John Fastabend
  1 sibling, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2016-11-18 23:23 UTC (permalink / raw)
  To: John Fastabend
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On Fri, 2016-11-18 at 11:00 -0800, John Fastabend wrote:
> From: Shrijeet Mukherjee <shrijeet@gmail.com>


>  #include <linux/slab.h>
> @@ -81,6 +82,8 @@ struct receive_queue {
>  
>  	struct napi_struct napi;
>  
> +	struct bpf_prog *xdp_prog;

Please add proper sparse annotation, as in 

	struct bpf_prog __rcu *xdp_prog;

And run sparse ;)

CONFIG_SPARSE_RCU_POINTER=y

make C=2 drivers/net/virtio_net.o


* Re: [PATCH 3/5] virtio_net: Add XDP support
  2016-11-18 23:23   ` Eric Dumazet
@ 2016-11-19  1:02     ` John Fastabend
  0 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-19  1:02 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On 16-11-18 03:23 PM, Eric Dumazet wrote:
> On Fri, 2016-11-18 at 11:00 -0800, John Fastabend wrote:
>> From: Shrijeet Mukherjee <shrijeet@gmail.com>
> 
> 
>>  #include <linux/slab.h>
>> @@ -81,6 +82,8 @@ struct receive_queue {
>>  
>>  	struct napi_struct napi;
>>  
>> +	struct bpf_prog *xdp_prog;
> 
> Please add proper sparse annotation, as in 
> 
> 	struct bpf_prog __rcu *xdp_prog;
> 
> And run sparse ;)
> 
> CONFIG_SPARSE_RCU_POINTER=y
> 
> make C=2 drivers/net/virtio_net.o
> 
> 
> 
> 

Yep will do thanks! And I will fix the other comment as well.


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-18 21:09   ` Jakub Kicinski
@ 2016-11-19  2:10     ` Jakub Kicinski
  2016-11-19  2:43       ` John Fastabend
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2016-11-19  2:10 UTC (permalink / raw)
  To: John Fastabend
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On Fri, 18 Nov 2016 13:09:53 -0800, Jakub Kicinski wrote:
> Looks very cool! :)
> 
> On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:
> > @@ -1542,12 +1546,34 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
> >  		return -EINVAL;
> >  	}
> >  
> > +	curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > +	if (prog)
> > +		xdp_qp = num_online_cpus();  
> 
> Is num_online_cpus() correct here?

Sorry, I don't know the virtio_net code, so I'm probably wrong.  I was
concerned whether the number of cpus can change, but also that the cpu
mask may be sparse and therefore offsetting by smp_processor_id()
into the queue table below could bring trouble.

@@ -353,9 +381,15 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
 	switch (act) {
 	case XDP_PASS:
 		return XDP_PASS;
+	case XDP_TX:
+		qp = vi->curr_queue_pairs -
+			vi->xdp_queue_pairs +
+			smp_processor_id();
+		xdp.data = buf + (vi->mergeable_rx_bufs ? 0 : 4);
+		virtnet_xdp_xmit(vi, qp, &xdp);
+		return XDP_TX;
 	default:
 		bpf_warn_invalid_xdp_action(act);
-	case XDP_TX:
 	case XDP_ABORTED:
 	case XDP_DROP:
 		return XDP_DROP;


* Re: [PATCH 3/5] virtio_net: Add XDP support
  2016-11-18 23:21   ` Eric Dumazet
@ 2016-11-19  2:16     ` John Fastabend
  0 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-19  2:16 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On 16-11-18 03:21 PM, Eric Dumazet wrote:
> On Fri, 2016-11-18 at 11:00 -0800, John Fastabend wrote:
> 
> 
>>  static void free_receive_bufs(struct virtnet_info *vi)
>>  {
>> +	struct bpf_prog *old_prog;
>>  	int i;
>>  
>>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>>  		while (vi->rq[i].pages)
>>  			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
>> +
>> +		old_prog = rcu_dereference(vi->rq[i].xdp_prog);
> 
> Seems wrong to me.
> 

Yep, it is wrong; it should be rtnl_dereference() here, and the
rcu_dereference() calls earlier in the patch need to be _bh().

> Are you sure lockdep (with CONFIG_PROVE_RCU=y) was happy with this ?

oops you are right it was missing.

> 
>> +		RCU_INIT_POINTER(vi->rq[i].xdp_prog, NULL);
>> +		if (old_prog)
>> +			bpf_prog_put(old_prog);

bpf_prog_put() waits a grace period if the ref count is zero. That
said, on driver unload we need to protect the bpf_prog_put() with
rtnl_lock() as well.

I'll send out a v2 in a bit.

Thanks a lot.

>>  	}
>>  }
>>  
>>
> 
> 


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-19  2:10     ` Jakub Kicinski
@ 2016-11-19  2:43       ` John Fastabend
  2016-11-19  2:57         ` Jakub Kicinski
  0 siblings, 1 reply; 18+ messages in thread
From: John Fastabend @ 2016-11-19  2:43 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On 16-11-18 06:10 PM, Jakub Kicinski wrote:
> On Fri, 18 Nov 2016 13:09:53 -0800, Jakub Kicinski wrote:
>> Looks very cool! :)
>>
>> On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:
>>> @@ -1542,12 +1546,34 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
>>>  		return -EINVAL;
>>>  	}
>>>  
>>> +	curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
>>> +	if (prog)
>>> +		xdp_qp = num_online_cpus();  
>>
>> Is num_online_cpus() correct here?
> 
> Sorry, I don't know the virtio_net code, so I'm probably wrong.  I was
> concerned whether the number of cpus can change, but also that the cpu
> mask may be sparse and therefore offsetting by smp_processor_id()
> into the queue table below could bring trouble.
> 

Seems like a valid concern to me; how about num_possible_cpus() instead?

> @@ -353,9 +381,15 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
>  	switch (act) {
>  	case XDP_PASS:
>  		return XDP_PASS;
> +	case XDP_TX:
> +		qp = vi->curr_queue_pairs -
> +			vi->xdp_queue_pairs +
> +			smp_processor_id();
> +		xdp.data = buf + (vi->mergeable_rx_bufs ? 0 : 4);
> +		virtnet_xdp_xmit(vi, qp, &xdp);
> +		return XDP_TX;
>  	default:
>  		bpf_warn_invalid_xdp_action(act);
> -	case XDP_TX:
>  	case XDP_ABORTED:
>  	case XDP_DROP:
>  		return XDP_DROP;
> 


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-19  2:43       ` John Fastabend
@ 2016-11-19  2:57         ` Jakub Kicinski
  2016-11-19  3:20           ` Eric Dumazet
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2016-11-19  2:57 UTC (permalink / raw)
  To: John Fastabend
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer, Eric Dumazet

On Fri, 18 Nov 2016 18:43:55 -0800, John Fastabend wrote:
> On 16-11-18 06:10 PM, Jakub Kicinski wrote:
> > On Fri, 18 Nov 2016 13:09:53 -0800, Jakub Kicinski wrote:  
> >> Looks very cool! :)
> >>
> >> On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:  
>  [...]  
> >>
> >> Is num_online_cpus() correct here?  
> > 
> > Sorry, I don't know the virtio_net code, so I'm probably wrong.  I was
> > concerned whether the number of cpus can change, but also that the cpu
> > mask may be sparse and therefore offsetting by smp_processor_id()
> > into the queue table below could bring trouble.
> >   
> 
> Seems like a valid concern to me; how about num_possible_cpus() instead?

That would solve problem 1, but could cpu_possible_mask still be sparse
on strange setups?  Let me try to dig into this, I recall someone
(Eric?) was fixing similar problems some time ago.

> > @@ -353,9 +381,15 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
> >  	switch (act) {
> >  	case XDP_PASS:
> >  		return XDP_PASS;
> > +	case XDP_TX:
> > +		qp = vi->curr_queue_pairs -
> > +			vi->xdp_queue_pairs +
> > +			smp_processor_id();
> > +		xdp.data = buf + (vi->mergeable_rx_bufs ? 0 : 4);
> > +		virtnet_xdp_xmit(vi, qp, &xdp);
> > +		return XDP_TX;
> >  	default:
> >  		bpf_warn_invalid_xdp_action(act);
> > -	case XDP_TX:
> >  	case XDP_ABORTED:
> >  	case XDP_DROP:
> >  		return XDP_DROP;
> >   


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-19  2:57         ` Jakub Kicinski
@ 2016-11-19  3:20           ` Eric Dumazet
  2016-11-19  3:23             ` Jakub Kicinski
  0 siblings, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2016-11-19  3:20 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: John Fastabend, tgraf, shm, alexei.starovoitov, daniel, davem,
	john.r.fastabend, netdev, bblanco, brouer

On Fri, 2016-11-18 at 18:57 -0800, Jakub Kicinski wrote:
> On Fri, 18 Nov 2016 18:43:55 -0800, John Fastabend wrote:
> > On 16-11-18 06:10 PM, Jakub Kicinski wrote:
> > > On Fri, 18 Nov 2016 13:09:53 -0800, Jakub Kicinski wrote:  
> > >> Looks very cool! :)
> > >>
> > >> On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:  
> >  [...]  
> > >>
> > >> Is num_online_cpus() correct here?  
> > > 
> > > Sorry, I don't know the virtio_net code, so I'm probably wrong.  I was
> > > concerned whether the number of cpus can change, but also that the cpu
> > > mask may be sparse and therefore offsetting by smp_processor_id()
> > > into the queue table below could bring trouble.
> > >   
> > 
> > Seems like a valid concern to me; how about num_possible_cpus() instead?
> 
> That would solve problem 1, but could cpu_possible_mask still be sparse
> on strange setups?  Let me try to dig into this, I recall someone
> (Eric?) was fixing similar problems some time ago.

nr_cpu_ids is probably what you want ;)


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-19  3:20           ` Eric Dumazet
@ 2016-11-19  3:23             ` Jakub Kicinski
  2016-11-19 21:33               ` John Fastabend
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2016-11-19  3:23 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: John Fastabend, tgraf, shm, alexei.starovoitov, daniel, davem,
	john.r.fastabend, netdev, bblanco, brouer

On Fri, 18 Nov 2016 19:20:58 -0800, Eric Dumazet wrote:
> On Fri, 2016-11-18 at 18:57 -0800, Jakub Kicinski wrote:
> > On Fri, 18 Nov 2016 18:43:55 -0800, John Fastabend wrote:  
> > > On 16-11-18 06:10 PM, Jakub Kicinski wrote:  
>  [...]  
> > > 
> > > Seems like a valid concern to me; how about num_possible_cpus() instead?
> > 
> > That would solve problem 1, but could cpu_possible_mask still be sparse
> > on strange setups?  Let me try to dig into this, I recall someone
> > (Eric?) was fixing similar problems some time ago.  
> 
> nr_cpu_ids is probably what you want ;)

Thank you :)


* Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
  2016-11-19  3:23             ` Jakub Kicinski
@ 2016-11-19 21:33               ` John Fastabend
  0 siblings, 0 replies; 18+ messages in thread
From: John Fastabend @ 2016-11-19 21:33 UTC (permalink / raw)
  To: Jakub Kicinski, Eric Dumazet
  Cc: tgraf, shm, alexei.starovoitov, daniel, davem, john.r.fastabend,
	netdev, bblanco, brouer

On 16-11-18 07:23 PM, Jakub Kicinski wrote:
> On Fri, 18 Nov 2016 19:20:58 -0800, Eric Dumazet wrote:
>> On Fri, 2016-11-18 at 18:57 -0800, Jakub Kicinski wrote:
>>> On Fri, 18 Nov 2016 18:43:55 -0800, John Fastabend wrote:  
>>>> On 16-11-18 06:10 PM, Jakub Kicinski wrote:  
>>  [...]  
>>>>
>>>> Seems like a valid concern to me; how about num_possible_cpus() instead?
>>>
>>> That would solve problem 1, but could cpu_possible_mask still be sparse
>>> on strange setups?  Let me try to dig into this, I recall someone
>>> (Eric?) was fixing similar problems some time ago.  
>>
>> nr_cpu_ids is probably what you want ;)
> 
> Thank you :)
> 

Yep, poked around a bit and the common pattern seems to be to use
nr_cpu_ids to build a cpu array and then index it with
smp_processor_id(). So I'll do this as well.
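
Something along these lines (sketch only, not tested):

	/* Reserve one XDP tx queue per possible cpu id.  Sizing by
	 * nr_cpu_ids rather than num_online_cpus() means a sparse
	 * cpu_possible_mask can never make smp_processor_id() index
	 * past the end of the reserved range.
	 */
	xdp_qp = nr_cpu_ids;
	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();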

Although I'm not sure I entirely follow how/if nr_cpu_ids !=
num_possible_cpus(), on x86 platforms at least.

Nice catch, Jakub.

Thanks,
John

