* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
@ 2016-12-17 13:39 Björn Töpel
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation Björn Töpel
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Björn Töpel @ 2016-12-17 13:39 UTC (permalink / raw)
  To: intel-wired-lan

From: Björn Töpel <bjorn.topel@intel.com>

This series adds XDP support for i40e-based NICs.

The first patch prepares i40e_fetch_rx_buffer() for upcoming changes,
the second adds XDP_RX support, the third adds XDP_TX support, and the
last patch validates bpf_xdp_adjust_head() support.

Thanks to Alex, Daniel, John and Scott for all the feedback!

v4:
  * Removed unused i40e_page_is_reserved function
  * Prior to running the XDP program, set the struct xdp_buff
    data_hard_start member

v3:
  * Rebased patch set on Jeff's dev-queue branch
  * MSI-X is no longer a prerequisite for XDP
  * RCU locking for the XDP program and XDP_RX support is introduced
    in the same patch
  * Rx bytes is now bumped for XDP
  * Removed pointer-to-pointer clunkiness
  * Added comments to XDP preconditions in ndo_xdp
  * When a non-EOF frame is received, log once, and drop the frame

v2:
  * Fixed kbuild error for PAGE_SIZE >= 8192.
  * Renamed i40e_try_flip_rx_page to i40e_can_reuse_rx_page, which is
    more in line with the other Intel Ethernet drivers (igb/fm10k).
  * Validate xdp_adjust_head support in ndo_xdp/XDP_SETUP_PROG.


Björn


Björn Töpel (4):
  i40e: Sync DMA region prior skbuff allocation
  i40e: Initial support for XDP
  i40e: Add XDP_TX support
  i40e: Validate xdp_adjust_head support

 drivers/net/ethernet/intel/i40e/i40e.h         |  18 ++
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c |   4 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    | 380 ++++++++++++++++++++----
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 383 ++++++++++++++++++++++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |   7 +
 5 files changed, 703 insertions(+), 89 deletions(-)

-- 
2.9.3


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation
  2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
@ 2016-12-17 13:39 ` Björn Töpel
  2016-12-23 17:35   ` Bowers, AndrewX
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP Björn Töpel
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Björn Töpel @ 2016-12-17 13:39 UTC (permalink / raw)
  To: intel-wired-lan

From: Björn Töpel <bjorn.topel@intel.com>

This patch prepares i40e_fetch_rx_buffer() for upcoming XDP support,
where there's a need to access the device buffers prior to skbuff
allocation.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9de05a0e8201..8bdc95c9e9b7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1646,6 +1646,13 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 	page = rx_buffer->page;
 	prefetchw(page);
 
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev,
+				      rx_buffer->dma,
+				      rx_buffer->page_offset,
+				      size,
+				      DMA_FROM_DEVICE);
+
 	if (likely(!skb)) {
 		void *page_addr = page_address(page) + rx_buffer->page_offset;
 
@@ -1671,13 +1678,6 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 		prefetchw(skb->data);
 	}
 
-	/* we are reusing so sync this buffer for CPU use */
-	dma_sync_single_range_for_cpu(rx_ring->dev,
-				      rx_buffer->dma,
-				      rx_buffer->page_offset,
-				      size,
-				      DMA_FROM_DEVICE);
-
 	/* pull page into skb */
 	if (i40e_add_rx_frag(rx_ring, rx_buffer, size, skb)) {
 		/* hand second half of page back to the ring */
-- 
2.9.3



* [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP
  2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation Björn Töpel
@ 2016-12-17 13:39 ` Björn Töpel
  2016-12-23 17:35   ` Bowers, AndrewX
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support Björn Töpel
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Björn Töpel @ 2016-12-17 13:39 UTC (permalink / raw)
  To: intel-wired-lan

From: Björn Töpel <bjorn.topel@intel.com>

This commit adds basic XDP support for i40e-derived NICs. All XDP
actions other than XDP_PASS result in the frame being dropped.

Only the default/main VSI has support for enabling XDP.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h         |  13 +++
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c |   4 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    |  83 +++++++++++++++++
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 124 +++++++++++++++++++++++--
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |   2 +
 5 files changed, 220 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 19a296d46023..5382d4782396 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -589,6 +589,8 @@ struct i40e_vsi {
 	struct i40e_ring **rx_rings;
 	struct i40e_ring **tx_rings;
 
+	bool xdp_enabled;
+
 	u32  active_filters;
 	u32  promisc_threshold;
 
@@ -948,4 +950,15 @@ i40e_status i40e_get_npar_bw_setting(struct i40e_pf *pf);
 i40e_status i40e_set_npar_bw_setting(struct i40e_pf *pf);
 i40e_status i40e_commit_npar_bw_setting(struct i40e_pf *pf);
 void i40e_print_link_message(struct i40e_vsi *vsi, bool isup);
+
+/**
+ * i40e_enabled_xdp_vsi - Check if VSI has XDP enabled
+ * @vsi: pointer to a vsi
+ *
+ * Returns true if the VSI has XDP enabled.
+ **/
+static inline bool i40e_enabled_xdp_vsi(const struct i40e_vsi *vsi)
+{
+	return !!vsi->xdp_enabled;
+}
 #endif /* _I40E_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index dece0d676482..ccb3b77405d7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -1257,6 +1257,10 @@ static int i40e_set_ringparam(struct net_device *netdev,
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;
 
+	/* Don't allow any change while XDP is enabled. */
+	if (i40e_enabled_xdp_vsi(vsi))
+		return -EINVAL;
+
 	if (ring->tx_pending > I40E_MAX_NUM_DESCRIPTORS ||
 	    ring->tx_pending < I40E_MIN_NUM_DESCRIPTORS ||
 	    ring->rx_pending > I40E_MAX_NUM_DESCRIPTORS ||
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 3f81a8503165..86bd2131d2bc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -24,6 +24,7 @@
  *
  ******************************************************************************/
 
+#include <linux/bpf.h>
 #include <linux/etherdevice.h>
 #include <linux/of_net.h>
 #include <linux/pci.h>
@@ -2483,6 +2484,13 @@ static int i40e_change_mtu(struct net_device *netdev, int new_mtu)
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_vsi *vsi = np->vsi;
 
+	if (i40e_enabled_xdp_vsi(vsi)) {
+		int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+
+		if (max_frame > I40E_RXBUFFER_2048)
+			return -EINVAL;
+	}
+
 	netdev_info(netdev, "changing MTU from %d to %d\n",
 		    netdev->mtu, new_mtu);
 	netdev->mtu = new_mtu;
@@ -9341,6 +9349,78 @@ static netdev_features_t i40e_features_check(struct sk_buff *skb,
 	return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
 }
 
+/**
+ * i40e_xdp_setup - Add/remove an XDP program to a VSI
+ * @vsi: the VSI to add the program
+ * @prog: the XDP program
+ **/
+static int i40e_xdp_setup(struct i40e_vsi *vsi,
+			  struct bpf_prog *prog)
+{
+	struct i40e_pf *pf = vsi->back;
+	struct net_device *netdev = vsi->netdev;
+	int i, frame_size = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+	bool need_reset;
+	struct bpf_prog *old_prog;
+
+	/* The Rx frame has to fit in 2k */
+	if (frame_size > I40E_RXBUFFER_2048)
+		return -EINVAL;
+
+	if (!i40e_enabled_xdp_vsi(vsi) && !prog)
+		return 0;
+
+	if (prog) {
+		prog = bpf_prog_add(prog, vsi->num_queue_pairs - 1);
+		if (IS_ERR(prog))
+			return PTR_ERR(prog);
+	}
+
+	/* When turning XDP on->off/off->on we reset and rebuild the rings. */
+	need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog);
+
+	if (need_reset)
+		i40e_prep_for_reset(pf);
+
+	vsi->xdp_enabled = !!prog;
+
+	if (need_reset)
+		i40e_reset_and_rebuild(pf, true);
+
+	for (i = 0; i < vsi->num_queue_pairs; i++) {
+		old_prog = rtnl_dereference(vsi->rx_rings[i]->xdp_prog);
+		rcu_assign_pointer(vsi->rx_rings[i]->xdp_prog, prog);
+		if (old_prog)
+			bpf_prog_put(old_prog);
+	}
+	return 0;
+}
+
+/**
+ * i40e_xdp - NDO for setting up/querying an XDP program
+ * @dev: the netdev
+ * @xdp: XDP program
+ **/
+static int i40e_xdp(struct net_device *dev,
+		    struct netdev_xdp *xdp)
+{
+	struct i40e_netdev_priv *np = netdev_priv(dev);
+	struct i40e_vsi *vsi = np->vsi;
+
+	if (vsi->type != I40E_VSI_MAIN)
+		return -EINVAL;
+
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return i40e_xdp_setup(vsi, xdp->prog);
+	case XDP_QUERY_PROG:
+		xdp->prog_attached = i40e_enabled_xdp_vsi(vsi);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
 static const struct net_device_ops i40e_netdev_ops = {
 	.ndo_open		= i40e_open,
 	.ndo_stop		= i40e_close,
@@ -9377,6 +9457,7 @@ static const struct net_device_ops i40e_netdev_ops = {
 	.ndo_features_check	= i40e_features_check,
 	.ndo_bridge_getlink	= i40e_ndo_bridge_getlink,
 	.ndo_bridge_setlink	= i40e_ndo_bridge_setlink,
+	.ndo_xdp                = i40e_xdp,
 };
 
 /**
@@ -11600,7 +11681,9 @@ static void i40e_remove(struct pci_dev *pdev)
 		pf->flags &= ~I40E_FLAG_SRIOV_ENABLED;
 	}
 
+	rtnl_lock();
 	i40e_fdir_teardown(pf);
+	rtnl_unlock();
 
 	/* If there is a switch structure or any orphans, remove them.
 	 * This will leave only the PF's VSI remaining.
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8bdc95c9e9b7..ad57c406c5f7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -24,6 +24,7 @@
  *
  ******************************************************************************/
 
+#include <linux/bpf.h>
 #include <linux/prefetch.h>
 #include <net/busy_poll.h>
 #include "i40e.h"
@@ -1013,6 +1014,7 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
 	struct device *dev = rx_ring->dev;
 	unsigned long bi_size;
 	u16 i;
+	struct bpf_prog *old_prog;
 
 	/* ring already cleared, nothing to do */
 	if (!rx_ring->rx_bi)
@@ -1046,6 +1048,11 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
 	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
+
+	old_prog = rtnl_dereference(rx_ring->xdp_prog);
+	RCU_INIT_POINTER(rx_ring->xdp_prog, NULL);
+	if (old_prog)
+		bpf_prog_put(old_prog);
 }
 
 /**
@@ -1620,19 +1627,84 @@ static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
 }
 
 /**
+ * i40e_run_xdp - Runs an XDP program for an Rx ring
+ * @rx_ring: Rx ring used for XDP
+ * @rx_buffer: current Rx buffer
+ * @rx_desc: current Rx descriptor
+ * @size: buffer size
+ * @xdp_prog: the XDP program to run
+ *
+ * Returns true if the XDP program consumed the incoming frame. False
+ * means pass the frame to the good old stack.
+ **/
+static bool i40e_run_xdp(struct i40e_ring *rx_ring,
+			 struct i40e_rx_buffer *rx_buffer,
+			 union i40e_rx_desc *rx_desc,
+			 unsigned int size,
+			 struct bpf_prog *xdp_prog)
+{
+	struct xdp_buff xdp;
+	u32 xdp_action;
+
+	if (unlikely(!i40e_test_staterr(rx_desc,
+					BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
+		dev_warn_once(&rx_ring->vsi->back->pdev->dev,
+			      "Received a non-EOF frame!\n");
+		goto do_drop;
+	}
+
+	xdp.data = page_address(rx_buffer->page) + rx_buffer->page_offset;
+	xdp.data_end = xdp.data + size;
+	xdp.data_hard_start = xdp.data;
+	xdp_action = bpf_prog_run_xdp(xdp_prog, &xdp);
+
+	switch (xdp_action) {
+	case XDP_PASS:
+		return false;
+	default:
+		bpf_warn_invalid_xdp_action(xdp_action);
+	case XDP_ABORTED:
+	case XDP_TX:
+	case XDP_DROP:
+do_drop:
+		if (likely(i40e_page_is_reusable(rx_buffer->page))) {
+			i40e_reuse_rx_page(rx_ring, rx_buffer);
+			rx_ring->rx_stats.page_reuse_count++;
+			break;
+		}
+
+		/* we are not reusing the buffer so unmap it */
+		dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+			       DMA_FROM_DEVICE);
+		__free_pages(rx_buffer->page, 0);
+	}
+
+	/* clear contents of buffer_info */
+	rx_buffer->page = NULL;
+	return true; /* Swallowed by XDP */
+}
+
+/**
  * i40e_fetch_rx_buffer - Allocate skb and populate it
  * @rx_ring: rx descriptor ring to transact packets on
  * @rx_desc: descriptor containing info written by hardware
+ * @skb: The allocated skb, if any
+ * @xdp_consumed_bytes: The size of the frame consumed by XDP
  *
- * This function allocates an skb on the fly, and populates it with the page
- * data from the current receive descriptor, taking care to set up the skb
- * correctly, as well as handling calling the page recycle function if
- * necessary.
+ * Unless XDP is enabled, this function allocates an skb on the fly,
+ * and populates it with the page data from the current receive
+ * descriptor, taking care to set up the skb correctly, as well as
+ * handling calling the page recycle function if necessary.
+ *
+ * If the received frame was consumed by XDP, NULL is returned and
+ * @xdp_consumed_bytes is set to the frame size. Otherwise, the skb is
+ * returned to the caller.
  */
 static inline
 struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 				     union i40e_rx_desc *rx_desc,
-				     struct sk_buff *skb)
+				     struct sk_buff *skb,
+				     unsigned int *xdp_consumed_bytes)
 {
 	u64 local_status_error_len =
 		le64_to_cpu(rx_desc->wb.qword1.status_error_len);
@@ -1641,6 +1713,7 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 		I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
 	struct i40e_rx_buffer *rx_buffer;
 	struct page *page;
+	struct bpf_prog *xdp_prog;
 
 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
 	page = rx_buffer->page;
@@ -1653,6 +1726,19 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 				      size,
 				      DMA_FROM_DEVICE);
 
+	rcu_read_lock();
+	xdp_prog = rcu_dereference(rx_ring->xdp_prog);
+	if (xdp_prog) {
+		bool xdp_consumed = i40e_run_xdp(rx_ring, rx_buffer, rx_desc,
+						 size, xdp_prog);
+		if (xdp_consumed) {
+			rcu_read_unlock();
+			*xdp_consumed_bytes = size;
+			return NULL;
+		}
+	}
+	rcu_read_unlock();
+
 	if (likely(!skb)) {
 		void *page_addr = page_address(page) + rx_buffer->page_offset;
 
@@ -1734,6 +1820,20 @@ static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
 }
 
 /**
+ * i40e_update_rx_next_to_clean - Bumps the next-to-clean for an Rx ring
+ * @rx_ring: Rx ring to bump
+ **/
+static void i40e_update_rx_next_to_clean(struct i40e_ring *rx_ring)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+
+	prefetch(I40E_RX_DESC(rx_ring, ntc));
+}
+
+/**
  * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
  * @rx_ring: rx descriptor ring to transact packets on
  * @budget: Total limit on number of packets to process
@@ -1757,6 +1857,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		u16 vlan_tag;
 		u8 rx_ptype;
 		u64 qword;
+		unsigned int xdp_consumed_bytes = 0;
 
 		/* return some buffers to hardware, one at a time is too slow */
 		if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
@@ -1782,7 +1883,18 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		 */
 		dma_rmb();
 
-		skb = i40e_fetch_rx_buffer(rx_ring, rx_desc, skb);
+		skb = i40e_fetch_rx_buffer(rx_ring, rx_desc, skb,
+					   &xdp_consumed_bytes);
+		if (xdp_consumed_bytes) {
+			cleaned_count++;
+
+			i40e_update_rx_next_to_clean(rx_ring);
+
+			total_rx_bytes += xdp_consumed_bytes;
+			total_rx_packets++;
+			continue;
+		}
+
 		if (!skb)
 			break;
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index f80979025c01..78d0aa0468f1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -361,6 +361,8 @@ struct i40e_ring {
 					 * i40e_clean_rx_ring_irq() is called
 					 * for this ring.
 					 */
+
+	struct bpf_prog __rcu *xdp_prog;
 } ____cacheline_internodealigned_in_smp;
 
 enum i40e_latency_range {
-- 
2.9.3



* [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support
  2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation Björn Töpel
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP Björn Töpel
@ 2016-12-17 13:39 ` Björn Töpel
  2016-12-23 17:36   ` Bowers, AndrewX
  2016-12-17 13:40 ` [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support Björn Töpel
  2017-01-31 21:37 ` [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Alexander Duyck
  4 siblings, 1 reply; 17+ messages in thread
From: Björn Töpel @ 2016-12-17 13:39 UTC (permalink / raw)
  To: intel-wired-lan

From: Björn Töpel <bjorn.topel@intel.com>

This patch adds proper XDP_TX support.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h      |   5 +
 drivers/net/ethernet/intel/i40e/i40e_main.c | 294 +++++++++++++++++++++++-----
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 255 +++++++++++++++++++++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.h |   5 +
 4 files changed, 478 insertions(+), 81 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 5382d4782396..1b0fadaf6fc9 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -589,6 +589,10 @@ struct i40e_vsi {
 	struct i40e_ring **rx_rings;
 	struct i40e_ring **tx_rings;
 
+	/* The XDP rings are Tx only, and follow the count of the
+	 * regular rings, i.e. alloc_queue_pairs/num_queue_pairs
+	 */
+	struct i40e_ring **xdp_rings;
 	bool xdp_enabled;
 
 	u32  active_filters;
@@ -666,6 +670,7 @@ struct i40e_q_vector {
 
 	struct i40e_ring_container rx;
 	struct i40e_ring_container tx;
+	struct i40e_ring_container xdp;
 
 	u8 num_ringpairs;	/* total number of ring pairs in vector */
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 86bd2131d2bc..efb95fb851f4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -107,6 +107,18 @@ MODULE_VERSION(DRV_VERSION);
 static struct workqueue_struct *i40e_wq;
 
 /**
+ * i40e_alloc_queue_pairs_xdp_vsi - required # of XDP queue pairs
+ * @vsi: pointer to a vsi
+ **/
+static u16 i40e_alloc_queue_pairs_xdp_vsi(const struct i40e_vsi *vsi)
+{
+	if (i40e_enabled_xdp_vsi(vsi))
+		return vsi->alloc_queue_pairs;
+
+	return 0;
+}
+
+/**
  * i40e_allocate_dma_mem_d - OS specific memory alloc for shared code
  * @hw:   pointer to the HW structure
  * @mem:  ptr to mem struct to fill out
@@ -2886,6 +2898,12 @@ static int i40e_vsi_setup_tx_resources(struct i40e_vsi *vsi)
 	for (i = 0; i < vsi->num_queue_pairs && !err; i++)
 		err = i40e_setup_tx_descriptors(vsi->tx_rings[i]);
 
+	if (!i40e_enabled_xdp_vsi(vsi))
+		return err;
+
+	for (i = 0; i < vsi->num_queue_pairs && !err; i++)
+		err = i40e_setup_tx_descriptors(vsi->xdp_rings[i]);
+
 	return err;
 }
 
@@ -2899,12 +2917,17 @@ static void i40e_vsi_free_tx_resources(struct i40e_vsi *vsi)
 {
 	int i;
 
-	if (!vsi->tx_rings)
-		return;
+	if (vsi->tx_rings) {
+		for (i = 0; i < vsi->num_queue_pairs; i++)
+			if (vsi->tx_rings[i] && vsi->tx_rings[i]->desc)
+				i40e_free_tx_resources(vsi->tx_rings[i]);
+	}
 
-	for (i = 0; i < vsi->num_queue_pairs; i++)
-		if (vsi->tx_rings[i] && vsi->tx_rings[i]->desc)
-			i40e_free_tx_resources(vsi->tx_rings[i]);
+	if (vsi->xdp_rings) {
+		for (i = 0; i < vsi->num_queue_pairs; i++)
+			if (vsi->xdp_rings[i] && vsi->xdp_rings[i]->desc)
+				i40e_free_tx_resources(vsi->xdp_rings[i]);
+	}
 }
 
 /**
@@ -3170,6 +3193,12 @@ static int i40e_vsi_configure_tx(struct i40e_vsi *vsi)
 	for (i = 0; (i < vsi->num_queue_pairs) && !err; i++)
 		err = i40e_configure_tx_ring(vsi->tx_rings[i]);
 
+	if (!i40e_enabled_xdp_vsi(vsi))
+		return err;
+
+	for (i = 0; (i < vsi->num_queue_pairs) && !err; i++)
+		err = i40e_configure_tx_ring(vsi->xdp_rings[i]);
+
 	return err;
 }
 
@@ -3318,7 +3347,7 @@ static void i40e_vsi_configure_msix(struct i40e_vsi *vsi)
 	struct i40e_hw *hw = &pf->hw;
 	u16 vector;
 	int i, q;
-	u32 qp;
+	u32 qp, qp_idx = 0;
 
 	/* The interrupt indexing is offset by 1 in the PFINT_ITRn
 	 * and PFINT_LNKLSTn registers, e.g.:
@@ -3345,16 +3374,33 @@ static void i40e_vsi_configure_msix(struct i40e_vsi *vsi)
 		wr32(hw, I40E_PFINT_LNKLSTN(vector - 1), qp);
 		for (q = 0; q < q_vector->num_ringpairs; q++) {
 			u32 val;
+			u32 nqp = qp;
+
+			if (i40e_enabled_xdp_vsi(vsi)) {
+				nqp = vsi->base_queue +
+				      vsi->xdp_rings[qp_idx]->queue_index;
+			}
 
 			val = I40E_QINT_RQCTL_CAUSE_ENA_MASK |
-			      (I40E_RX_ITR << I40E_QINT_RQCTL_ITR_INDX_SHIFT)  |
-			      (vector      << I40E_QINT_RQCTL_MSIX_INDX_SHIFT) |
-			      (qp          << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT)|
+			      (I40E_RX_ITR << I40E_QINT_RQCTL_ITR_INDX_SHIFT)   |
+			      (vector      << I40E_QINT_RQCTL_MSIX_INDX_SHIFT)  |
+			      (nqp         << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) |
 			      (I40E_QUEUE_TYPE_TX
 				      << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT);
 
 			wr32(hw, I40E_QINT_RQCTL(qp), val);
 
+			if (i40e_enabled_xdp_vsi(vsi)) {
+				val = I40E_QINT_TQCTL_CAUSE_ENA_MASK |
+				      (I40E_TX_ITR << I40E_QINT_TQCTL_ITR_INDX_SHIFT)   |
+				      (vector      << I40E_QINT_TQCTL_MSIX_INDX_SHIFT)  |
+				      (qp          << I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT) |
+				      (I40E_QUEUE_TYPE_TX
+				       << I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT);
+
+				wr32(hw, I40E_QINT_TQCTL(nqp), val);
+			}
+
 			val = I40E_QINT_TQCTL_CAUSE_ENA_MASK |
 			      (I40E_TX_ITR << I40E_QINT_TQCTL_ITR_INDX_SHIFT)  |
 			      (vector      << I40E_QINT_TQCTL_MSIX_INDX_SHIFT) |
@@ -3369,6 +3415,7 @@ static void i40e_vsi_configure_msix(struct i40e_vsi *vsi)
 
 			wr32(hw, I40E_QINT_TQCTL(qp), val);
 			qp++;
+			qp_idx++;
 		}
 	}
 
@@ -3422,7 +3469,7 @@ static void i40e_configure_msi_and_legacy(struct i40e_vsi *vsi)
 	struct i40e_q_vector *q_vector = vsi->q_vectors[0];
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
-	u32 val;
+	u32 val, nqp = 0;
 
 	/* set the ITR configuration */
 	q_vector->itr_countdown = ITR_COUNTDOWN_START;
@@ -3438,13 +3485,28 @@ static void i40e_configure_msi_and_legacy(struct i40e_vsi *vsi)
 	/* FIRSTQ_INDX = 0, FIRSTQ_TYPE = 0 (rx) */
 	wr32(hw, I40E_PFINT_LNKLST0, 0);
 
+	if (i40e_enabled_xdp_vsi(vsi)) {
+		nqp = vsi->base_queue +
+		      vsi->xdp_rings[0]->queue_index;
+	}
+
 	/* Associate the queue pair to the vector and enable the queue int */
-	val = I40E_QINT_RQCTL_CAUSE_ENA_MASK		      |
-	      (I40E_RX_ITR << I40E_QINT_RQCTL_ITR_INDX_SHIFT) |
+	val = I40E_QINT_RQCTL_CAUSE_ENA_MASK			|
+	      (I40E_RX_ITR << I40E_QINT_RQCTL_ITR_INDX_SHIFT)	|
+	      (nqp	   << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) |
 	      (I40E_QUEUE_TYPE_TX << I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT);
 
 	wr32(hw, I40E_QINT_RQCTL(0), val);
 
+	if (i40e_enabled_xdp_vsi(vsi)) {
+		val = I40E_QINT_TQCTL_CAUSE_ENA_MASK		      |
+		      (I40E_TX_ITR << I40E_QINT_TQCTL_ITR_INDX_SHIFT) |
+		      (I40E_QUEUE_TYPE_TX
+		       << I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT);
+
+	       wr32(hw, I40E_QINT_TQCTL(nqp), val);
+	}
+
 	val = I40E_QINT_TQCTL_CAUSE_ENA_MASK		      |
 	      (I40E_TX_ITR << I40E_QINT_TQCTL_ITR_INDX_SHIFT) |
 	      (I40E_QUEUE_END_OF_LIST << I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT);
@@ -3611,6 +3673,10 @@ static void i40e_vsi_disable_irq(struct i40e_vsi *vsi)
 	for (i = 0; i < vsi->num_queue_pairs; i++) {
 		wr32(hw, I40E_QINT_TQCTL(vsi->tx_rings[i]->reg_idx), 0);
 		wr32(hw, I40E_QINT_RQCTL(vsi->rx_rings[i]->reg_idx), 0);
+		if (i40e_enabled_xdp_vsi(vsi)) {
+			wr32(hw, I40E_QINT_TQCTL(vsi->xdp_rings[i]->reg_idx),
+			     0);
+		}
 	}
 
 	if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
@@ -3920,6 +3986,24 @@ static void i40e_map_vector_to_qp(struct i40e_vsi *vsi, int v_idx, int qp_idx)
 }
 
 /**
+ * i40e_map_vector_to_xdp_ring - Assigns the XDP Tx queue to the vector
+ * @vsi: the VSI being configured
+ * @v_idx: vector index
+ * @xdp_idx: XDP Tx queue index
+ **/
+static void i40e_map_vector_to_xdp_ring(struct i40e_vsi *vsi, int v_idx,
+					int xdp_idx)
+{
+	struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
+	struct i40e_ring *xdp_ring = vsi->xdp_rings[xdp_idx];
+
+	xdp_ring->q_vector = q_vector;
+	xdp_ring->next = q_vector->xdp.ring;
+	q_vector->xdp.ring = xdp_ring;
+	q_vector->xdp.count++;
+}
+
+/**
  * i40e_vsi_map_rings_to_vectors - Maps descriptor rings to vectors
  * @vsi: the VSI being configured
  *
@@ -3952,11 +4036,17 @@ static void i40e_vsi_map_rings_to_vectors(struct i40e_vsi *vsi)
 
 		q_vector->rx.count = 0;
 		q_vector->tx.count = 0;
+		q_vector->xdp.count = 0;
 		q_vector->rx.ring = NULL;
 		q_vector->tx.ring = NULL;
+		q_vector->xdp.ring = NULL;
 
 		while (num_ringpairs--) {
 			i40e_map_vector_to_qp(vsi, v_start, qp_idx);
+			if (i40e_enabled_xdp_vsi(vsi)) {
+				i40e_map_vector_to_xdp_ring(vsi, v_start,
+							    qp_idx);
+			}
 			qp_idx++;
 			qp_remaining--;
 		}
@@ -4050,56 +4140,82 @@ static int i40e_pf_txq_wait(struct i40e_pf *pf, int pf_q, bool enable)
 }
 
 /**
- * i40e_vsi_control_tx - Start or stop a VSI's rings
+ * i40e_vsi_control_txq - Start or stop a VSI's queue
  * @vsi: the VSI being configured
  * @enable: start or stop the rings
+ * @pf_q: the PF queue
  **/
-static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+static int i40e_vsi_control_txq(struct i40e_vsi *vsi, bool enable, int pf_q)
 {
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
-	int i, j, pf_q, ret = 0;
+	int j, ret = 0;
 	u32 tx_reg;
 
-	pf_q = vsi->base_queue;
-	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+	/* warn the TX unit of coming changes */
+	i40e_pre_tx_queue_cfg(&pf->hw, pf_q, enable);
+	if (!enable)
+		usleep_range(10, 20);
 
-		/* warn the TX unit of coming changes */
-		i40e_pre_tx_queue_cfg(&pf->hw, pf_q, enable);
-		if (!enable)
-			usleep_range(10, 20);
+	for (j = 0; j < 50; j++) {
+		tx_reg = rd32(hw, I40E_QTX_ENA(pf_q));
+		if (((tx_reg >> I40E_QTX_ENA_QENA_REQ_SHIFT) & 1) ==
+		    ((tx_reg >> I40E_QTX_ENA_QENA_STAT_SHIFT) & 1))
+			break;
+		usleep_range(1000, 2000);
+	}
+	/* Skip if the queue is already in the requested state */
+	if (enable == !!(tx_reg & I40E_QTX_ENA_QENA_STAT_MASK))
+		return 0;
 
-		for (j = 0; j < 50; j++) {
-			tx_reg = rd32(hw, I40E_QTX_ENA(pf_q));
-			if (((tx_reg >> I40E_QTX_ENA_QENA_REQ_SHIFT) & 1) ==
-			    ((tx_reg >> I40E_QTX_ENA_QENA_STAT_SHIFT) & 1))
-				break;
-			usleep_range(1000, 2000);
-		}
-		/* Skip if the queue is already in the requested state */
-		if (enable == !!(tx_reg & I40E_QTX_ENA_QENA_STAT_MASK))
-			continue;
+	/* turn on/off the queue */
+	if (enable) {
+		wr32(hw, I40E_QTX_HEAD(pf_q), 0);
+		tx_reg |= I40E_QTX_ENA_QENA_REQ_MASK;
+	} else {
+		tx_reg &= ~I40E_QTX_ENA_QENA_REQ_MASK;
+	}
 
-		/* turn on/off the queue */
-		if (enable) {
-			wr32(hw, I40E_QTX_HEAD(pf_q), 0);
-			tx_reg |= I40E_QTX_ENA_QENA_REQ_MASK;
-		} else {
-			tx_reg &= ~I40E_QTX_ENA_QENA_REQ_MASK;
-		}
+	wr32(hw, I40E_QTX_ENA(pf_q), tx_reg);
+	/* No waiting for the Tx queue to disable */
+	if (!enable && test_bit(__I40E_PORT_TX_SUSPENDED, &pf->state))
+		return 0;
 
-		wr32(hw, I40E_QTX_ENA(pf_q), tx_reg);
-		/* No waiting for the Tx queue to disable */
-		if (!enable && test_bit(__I40E_PORT_TX_SUSPENDED, &pf->state))
-			continue;
+	/* wait for the change to finish */
+	ret = i40e_pf_txq_wait(pf, pf_q, enable);
+	if (ret) {
+		dev_info(&pf->pdev->dev,
+			 "VSI seid %d Tx ring %d %sable timeout\n",
+			 vsi->seid, pf_q, (enable ? "en" : "dis"));
+		return ret;
+	}
+	return 0;
+}
 
-		/* wait for the change to finish */
-		ret = i40e_pf_txq_wait(pf, pf_q, enable);
-		if (ret) {
-			dev_info(&pf->pdev->dev,
-				 "VSI seid %d Tx ring %d %sable timeout\n",
-				 vsi->seid, pf_q, (enable ? "en" : "dis"));
+/**
+ * i40e_vsi_control_tx - Start or stop a VSI's rings
+ * @vsi: the VSI being configured
+ * @enable: start or stop the rings
+ **/
+static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+{
+	struct i40e_pf *pf = vsi->back;
+	struct i40e_hw *hw = &pf->hw;
+	int i, pf_q, ret = 0;
+
+	pf_q = vsi->base_queue;
+	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+		ret = i40e_vsi_control_txq(vsi, enable, pf_q);
+		if (ret)
 			break;
+	}
+
+	if (!ret && i40e_enabled_xdp_vsi(vsi)) {
+		for (i = 0; i < vsi->num_queue_pairs; i++) {
+			pf_q = vsi->base_queue + vsi->xdp_rings[i]->queue_index;
+			ret = i40e_vsi_control_txq(vsi, enable, pf_q);
+			if (ret)
+				break;
 		}
 	}
 
@@ -4360,6 +4476,9 @@ static void i40e_free_q_vector(struct i40e_vsi *vsi, int v_idx)
 	i40e_for_each_ring(ring, q_vector->rx)
 		ring->q_vector = NULL;
 
+	i40e_for_each_ring(ring, q_vector->xdp)
+		ring->q_vector = NULL;
+
 	/* only VSI w/ an associated netdev is set up w/ NAPI */
 	if (vsi->netdev)
 		netif_napi_del(&q_vector->napi);
@@ -4583,6 +4702,21 @@ static int i40e_vsi_wait_queues_disabled(struct i40e_vsi *vsi)
 		}
 	}
 
+	if (!i40e_enabled_xdp_vsi(vsi))
+		return 0;
+
+	for (i = 0; i < vsi->num_queue_pairs; i++) {
+		pf_q = vsi->base_queue + vsi->xdp_rings[i]->queue_index;
+		/* Check and wait for the disable status of the queue */
+		ret = i40e_pf_txq_wait(pf, pf_q, false);
+		if (ret) {
+			dev_info(&pf->pdev->dev,
+				 "VSI seid %d XDP Tx ring %d disable timeout\n",
+				 vsi->seid, pf_q);
+			return ret;
+		}
+	}
+
 	return 0;
 }
 
@@ -5540,6 +5674,8 @@ void i40e_down(struct i40e_vsi *vsi)
 
 	for (i = 0; i < vsi->num_queue_pairs; i++) {
 		i40e_clean_tx_ring(vsi->tx_rings[i]);
+		if (i40e_enabled_xdp_vsi(vsi))
+			i40e_clean_tx_ring(vsi->xdp_rings[i]);
 		i40e_clean_rx_ring(vsi->rx_rings[i]);
 	}
 
@@ -7542,6 +7678,16 @@ static int i40e_vsi_alloc_arrays(struct i40e_vsi *vsi, bool alloc_qvectors)
 		return -ENOMEM;
 	vsi->rx_rings = &vsi->tx_rings[vsi->alloc_queue_pairs];
 
+	if (i40e_enabled_xdp_vsi(vsi)) {
+		size = sizeof(struct i40e_ring *) *
+		       i40e_alloc_queue_pairs_xdp_vsi(vsi);
+		vsi->xdp_rings = kzalloc(size, GFP_KERNEL);
+		if (!vsi->xdp_rings) {
+			ret = -ENOMEM;
+			goto err_xdp_rings;
+		}
+	}
+
 	if (alloc_qvectors) {
 		/* allocate memory for q_vector pointers */
 		size = sizeof(struct i40e_q_vector *) * vsi->num_q_vectors;
@@ -7554,6 +7700,8 @@ static int i40e_vsi_alloc_arrays(struct i40e_vsi *vsi, bool alloc_qvectors)
 	return ret;
 
 err_vectors:
+	kfree(vsi->xdp_rings);
+err_xdp_rings:
 	kfree(vsi->tx_rings);
 	return ret;
 }
@@ -7660,6 +7808,8 @@ static void i40e_vsi_free_arrays(struct i40e_vsi *vsi, bool free_qvectors)
 	kfree(vsi->tx_rings);
 	vsi->tx_rings = NULL;
 	vsi->rx_rings = NULL;
+	kfree(vsi->xdp_rings);
+	vsi->xdp_rings = NULL;
 }
 
 /**
@@ -7745,6 +7895,13 @@ static void i40e_vsi_clear_rings(struct i40e_vsi *vsi)
 			vsi->rx_rings[i] = NULL;
 		}
 	}
+
+	if (vsi->xdp_rings && vsi->xdp_rings[0]) {
+		for (i = 0; i < vsi->alloc_queue_pairs; i++) {
+			kfree_rcu(vsi->xdp_rings[i], rcu);
+			vsi->xdp_rings[i] = NULL;
+		}
+	}
 }
 
 /**
@@ -7792,6 +7949,31 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
 		vsi->rx_rings[i] = rx_ring;
 	}
 
+	if (!i40e_enabled_xdp_vsi(vsi))
+		return 0;
+
+	for (i = 0; i < vsi->alloc_queue_pairs; i++) {
+		tx_ring = kzalloc(sizeof(*tx_ring), GFP_KERNEL);
+		if (!tx_ring)
+			goto err_out;
+
+		tx_ring->queue_index = vsi->alloc_queue_pairs + i;
+		tx_ring->reg_idx = vsi->base_queue + vsi->alloc_queue_pairs + i;
+		tx_ring->ring_active = false;
+		tx_ring->vsi = vsi;
+		tx_ring->netdev = NULL;
+		tx_ring->dev = &pf->pdev->dev;
+		tx_ring->count = vsi->num_desc;
+		tx_ring->size = 0;
+		tx_ring->dcb_tc = 0;
+		if (vsi->back->flags & I40E_FLAG_WB_ON_ITR_CAPABLE)
+			tx_ring->flags = I40E_TXR_FLAGS_WB_ON_ITR;
+		tx_ring->tx_itr_setting = pf->tx_itr_default;
+		tx_ring->xdp_sibling = vsi->rx_rings[i];
+		vsi->xdp_rings[i] = tx_ring;
+		vsi->rx_rings[i]->xdp_sibling = tx_ring;
+	}
+
 	return 0;
 
 err_out:
@@ -10035,6 +10217,7 @@ static struct i40e_vsi *i40e_vsi_reinit_setup(struct i40e_vsi *vsi)
 	struct i40e_pf *pf;
 	u8 enabled_tc;
 	int ret;
+	u16 alloc_queue_pairs;
 
 	if (!vsi)
 		return NULL;
@@ -10050,11 +10233,13 @@ static struct i40e_vsi *i40e_vsi_reinit_setup(struct i40e_vsi *vsi)
 	if (ret)
 		goto err_vsi;
 
-	ret = i40e_get_lump(pf, pf->qp_pile, vsi->alloc_queue_pairs, vsi->idx);
+	alloc_queue_pairs = vsi->alloc_queue_pairs +
+			    i40e_alloc_queue_pairs_xdp_vsi(vsi);
+	ret = i40e_get_lump(pf, pf->qp_pile, alloc_queue_pairs, vsi->idx);
 	if (ret < 0) {
 		dev_info(&pf->pdev->dev,
 			 "failed to get tracking for %d queues for VSI %d err %d\n",
-			 vsi->alloc_queue_pairs, vsi->seid, ret);
+			 alloc_queue_pairs, vsi->seid, ret);
 		goto err_vsi;
 	}
 	vsi->base_queue = ret;
@@ -10112,6 +10297,7 @@ struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, u8 type,
 	struct i40e_veb *veb = NULL;
 	int ret, i;
 	int v_idx;
+	u16 alloc_queue_pairs;
 
 	/* The requested uplink_seid must be either
 	 *     - the PF's port seid
@@ -10196,13 +10382,15 @@ struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, u8 type,
 		pf->lan_vsi = v_idx;
 	else if (type == I40E_VSI_SRIOV)
 		vsi->vf_id = param1;
+
+	alloc_queue_pairs = vsi->alloc_queue_pairs +
+			    i40e_alloc_queue_pairs_xdp_vsi(vsi);
 	/* assign it some queues */
-	ret = i40e_get_lump(pf, pf->qp_pile, vsi->alloc_queue_pairs,
-				vsi->idx);
+	ret = i40e_get_lump(pf, pf->qp_pile, alloc_queue_pairs, vsi->idx);
 	if (ret < 0) {
 		dev_info(&pf->pdev->dev,
 			 "failed to get tracking for %d queues for VSI %d err=%d\n",
-			 vsi->alloc_queue_pairs, vsi->seid, ret);
+			 alloc_queue_pairs, vsi->seid, ret);
 		goto err_vsi;
 	}
 	vsi->base_queue = ret;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index ad57c406c5f7..14d84509a3cc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -525,6 +525,8 @@ static void i40e_unmap_and_free_tx_resource(struct i40e_ring *ring,
 	if (tx_buffer->skb) {
 		if (tx_buffer->tx_flags & I40E_TX_FLAGS_FD_SB)
 			kfree(tx_buffer->raw_buf);
+		else if (tx_buffer->tx_flags & I40E_TX_FLAGS_XDP)
+			put_page(tx_buffer->page);
 		else
 			dev_kfree_skb_any(tx_buffer->skb);
 		if (dma_unmap_len(tx_buffer, len))
@@ -767,6 +769,98 @@ static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
 	return !!budget;
 }
 
+static bool i40e_clean_xdp_irq(struct i40e_vsi *vsi,
+			       struct i40e_ring *tx_ring)
+{
+	u16 i = tx_ring->next_to_clean;
+	struct i40e_tx_buffer *tx_buf;
+	struct i40e_tx_desc *tx_head;
+	struct i40e_tx_desc *tx_desc;
+	unsigned int total_bytes = 0, total_packets = 0;
+	unsigned int budget = vsi->work_limit;
+
+	tx_buf = &tx_ring->tx_bi[i];
+	tx_desc = I40E_TX_DESC(tx_ring, i);
+	i -= tx_ring->count;
+
+	tx_head = I40E_TX_DESC(tx_ring, i40e_get_head(tx_ring));
+
+	do {
+		struct i40e_tx_desc *eop_desc = tx_buf->next_to_watch;
+
+		/* if next_to_watch is not set then there is no work pending */
+		if (!eop_desc)
+			break;
+
+		/* prevent any other reads prior to eop_desc */
+		read_barrier_depends();
+
+		/* we have caught up to head, no work left to do */
+		if (tx_head == tx_desc)
+			break;
+
+		/* clear next_to_watch to prevent false hangs */
+		tx_buf->next_to_watch = NULL;
+
+		/* update the statistics for this packet */
+		total_bytes += tx_buf->bytecount;
+		total_packets += tx_buf->gso_segs;
+
+		put_page(tx_buf->page);
+
+		/* unmap skb header data */
+		dma_unmap_single(tx_ring->dev,
+				 dma_unmap_addr(tx_buf, dma),
+				 dma_unmap_len(tx_buf, len),
+				 DMA_TO_DEVICE);
+
+		/* clear tx_buffer data */
+		tx_buf->skb = NULL;
+		dma_unmap_len_set(tx_buf, len, 0);
+
+		/* move us one more past the eop_desc for start of next pkt */
+		tx_buf++;
+		tx_desc++;
+		i++;
+		if (unlikely(!i)) {
+			i -= tx_ring->count;
+			tx_buf = tx_ring->tx_bi;
+			tx_desc = I40E_TX_DESC(tx_ring, 0);
+		}
+
+		prefetch(tx_desc);
+
+		/* update budget accounting */
+		budget--;
+	} while (likely(budget));
+
+	i += tx_ring->count;
+	tx_ring->next_to_clean = i;
+	u64_stats_update_begin(&tx_ring->syncp);
+	tx_ring->stats.bytes += total_bytes;
+	tx_ring->stats.packets += total_packets;
+	u64_stats_update_end(&tx_ring->syncp);
+	tx_ring->q_vector->tx.total_bytes += total_bytes;
+	tx_ring->q_vector->tx.total_packets += total_packets;
+
+	if (tx_ring->flags & I40E_TXR_FLAGS_WB_ON_ITR) {
+		/* check to see if there are < 4 descriptors
+		 * waiting to be written back, then kick the hardware to force
+		 * them to be written back in case we stay in NAPI.
+		 * In this mode on X722 we do not enable Interrupt.
+		 */
+		unsigned int j = i40e_get_tx_pending(tx_ring, false);
+
+		if (budget &&
+		    ((j / WB_STRIDE) == 0) && (j > 0) &&
+		    !test_bit(__I40E_DOWN, &vsi->state) &&
+		    (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
+			tx_ring->arm_wb = true;
+	}
+
+	return !!budget;
+}
+
 /**
  * i40e_enable_wb_on_itr - Arm hardware to do a wb, interrupts are not enabled
  * @vsi: the VSI we care about
@@ -1460,29 +1554,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb)
 }
 
 /**
- * i40e_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
-			       struct i40e_rx_buffer *old_buff)
-{
-	struct i40e_rx_buffer *new_buff;
-	u16 nta = rx_ring->next_to_alloc;
-
-	new_buff = &rx_ring->rx_bi[nta];
-
-	/* update, and store next to alloc */
-	nta++;
-	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
-	/* transfer page from old buffer to new buffer */
-	*new_buff = *old_buff;
-}
-
-/**
  * i40e_page_is_reusable - check if any reuse is possible
  * @page: page struct to check
  *
@@ -1627,6 +1698,103 @@ static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
 }
 
 /**
+ * i40e_xdp_xmit_tail_bump - updates the tail and sets the RS bit
+ * @xdp_ring: XDP Tx ring
+ **/
+static
+void i40e_xdp_xmit_tail_bump(struct i40e_ring *xdp_ring)
+{
+	struct i40e_tx_desc *tx_desc;
+
+	/* Set RS and bump tail */
+	tx_desc = I40E_TX_DESC(xdp_ring, xdp_ring->curr_in_use);
+	tx_desc->cmd_type_offset_bsz |=
+		cpu_to_le64(I40E_TX_DESC_CMD_RS << I40E_TXD_QW1_CMD_SHIFT);
+	/* Force memory writes to complete before letting h/w know
+	 * there are new descriptors to fetch.  (Only applicable for
+	 * weak-ordered memory model archs, such as IA-64).
+	 */
+	wmb();
+	writel(xdp_ring->curr_in_use, xdp_ring->tail);
+
+	xdp_ring->xdp_needs_tail_bump = false;
+}
+
+/**
+ * i40e_xdp_xmit - transmit a frame on the XDP Tx queue
+ * @data: pointer to the start of the frame
+ * @size: size of the frame in bytes
+ * @page: page containing the frame
+ * @xdp_ring: XDP Tx ring
+ *
+ * Returns true if the frame was successfully sent. On failure the
+ * caller retains ownership of the page.
+ **/
+static bool i40e_xdp_xmit(void *data, size_t size, struct page *page,
+			  struct i40e_ring *xdp_ring)
+{
+	struct i40e_tx_buffer *tx_bi;
+	struct i40e_tx_desc *tx_desc;
+	u16 i = xdp_ring->next_to_use;
+	dma_addr_t dma;
+
+	if (unlikely(I40E_DESC_UNUSED(xdp_ring) < 1)) {
+		if (xdp_ring->xdp_needs_tail_bump)
+			i40e_xdp_xmit_tail_bump(xdp_ring);
+		xdp_ring->tx_stats.tx_busy++;
+		return false;
+	}
+
+	tx_bi = &xdp_ring->tx_bi[i];
+	tx_bi->bytecount = size;
+	tx_bi->gso_segs = 1;
+	tx_bi->tx_flags = I40E_TX_FLAGS_XDP;
+	tx_bi->page = page;
+
+	dma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE);
+	if (dma_mapping_error(xdp_ring->dev, dma))
+		return false;
+
+	/* record length, and DMA address */
+	dma_unmap_len_set(tx_bi, len, size);
+	dma_unmap_addr_set(tx_bi, dma, dma);
+
+	tx_desc = I40E_TX_DESC(xdp_ring, i);
+	tx_desc->buffer_addr = cpu_to_le64(dma);
+	tx_desc->cmd_type_offset_bsz = build_ctob(I40E_TX_DESC_CMD_ICRC
+						  | I40E_TX_DESC_CMD_EOP,
+						  0, size, 0);
+	tx_bi->next_to_watch = tx_desc;
+	xdp_ring->curr_in_use = i++;
+	xdp_ring->next_to_use = (i < xdp_ring->count) ? i : 0;
+	xdp_ring->xdp_needs_tail_bump = true;
+	return true;
+}
+
+/**
+ * i40e_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
+			       struct i40e_rx_buffer *old_buff)
+{
+	struct i40e_rx_buffer *new_buff;
+	u16 nta = rx_ring->next_to_alloc;
+
+	new_buff = &rx_ring->rx_bi[nta];
+
+	/* update, and store next to alloc */
+	nta++;
+	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
+
+	/* transfer page from old buffer to new buffer */
+	*new_buff = *old_buff;
+}
+
+/**
  * i40e_run_xdp - Runs an XDP program for an Rx ring
  * @rx_ring: Rx ring used for XDP
  * @rx_buffer: current Rx buffer
@@ -1643,8 +1811,14 @@ static bool i40e_run_xdp(struct i40e_ring *rx_ring,
 			 unsigned int size,
 			 struct bpf_prog *xdp_prog)
 {
+#if (PAGE_SIZE < 8192)
+	unsigned int truesize = I40E_RXBUFFER_2048;
+#else
+	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+#endif
 	struct xdp_buff xdp;
 	u32 xdp_action;
+	bool tx_ok;
 
 	if (unlikely(!i40e_test_staterr(rx_desc,
 					BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
@@ -1661,10 +1835,21 @@ static bool i40e_run_xdp(struct i40e_ring *rx_ring,
 	switch (xdp_action) {
 	case XDP_PASS:
 		return false;
-	default:
-		bpf_warn_invalid_xdp_action(xdp_action);
-	case XDP_ABORTED:
 	case XDP_TX:
+		tx_ok = i40e_xdp_xmit(xdp.data, size, rx_buffer->page,
+				      rx_ring->xdp_sibling);
+		if (likely(tx_ok)) {
+			if (i40e_can_reuse_rx_page(rx_buffer, rx_buffer->page,
+						   truesize)) {
+				i40e_reuse_rx_page(rx_ring, rx_buffer);
+				rx_ring->rx_stats.page_reuse_count++;
+			} else {
+				dma_unmap_page(rx_ring->dev, rx_buffer->dma,
+					       PAGE_SIZE, DMA_FROM_DEVICE);
+			}
+			break;
+		}
+	case XDP_ABORTED:
 	case XDP_DROP:
 do_drop:
 		if (likely(i40e_page_is_reusable(rx_buffer->page))) {
@@ -1672,11 +1857,13 @@ static bool i40e_run_xdp(struct i40e_ring *rx_ring,
 			rx_ring->rx_stats.page_reuse_count++;
 			break;
 		}
-
-		/* we are not reusing the buffer so unmap it */
 		dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
 			       DMA_FROM_DEVICE);
 		__free_pages(rx_buffer->page, 0);
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(xdp_action);
+		goto do_drop;
 	}
 
 	/* clear contents of buffer_info */
@@ -2104,6 +2291,15 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
 		ring->arm_wb = false;
 	}
 
+	i40e_for_each_ring(ring, q_vector->xdp) {
+		if (!i40e_clean_xdp_irq(vsi, ring)) {
+			clean_complete = false;
+			continue;
+		}
+		arm_wb |= ring->arm_wb;
+		ring->arm_wb = false;
+	}
+
 	/* Handle case where we are called by netpoll with a budget of 0 */
 	if (budget <= 0)
 		goto tx_only;
@@ -2116,6 +2312,9 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
 	i40e_for_each_ring(ring, q_vector->rx) {
 		int cleaned = i40e_clean_rx_irq(ring, budget_per_ring);
 
+		if (ring->xdp_sibling && ring->xdp_sibling->xdp_needs_tail_bump)
+			i40e_xdp_xmit_tail_bump(ring->xdp_sibling);
+
 		work_done += cleaned;
 		/* if we clean as many as budgeted, we must not be done */
 		if (cleaned >= budget_per_ring)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 78d0aa0468f1..3250be70271d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -233,6 +233,7 @@ static inline unsigned int i40e_txd_use_count(unsigned int size)
 #define I40E_TX_FLAGS_TSYN		BIT(8)
 #define I40E_TX_FLAGS_FD_SB		BIT(9)
 #define I40E_TX_FLAGS_UDP_TUNNEL	BIT(10)
+#define I40E_TX_FLAGS_XDP		BIT(11)
 #define I40E_TX_FLAGS_VLAN_MASK		0xffff0000
 #define I40E_TX_FLAGS_VLAN_PRIO_MASK	0xe0000000
 #define I40E_TX_FLAGS_VLAN_PRIO_SHIFT	29
@@ -243,6 +244,7 @@ struct i40e_tx_buffer {
 	union {
 		struct sk_buff *skb;
 		void *raw_buf;
+		struct page *page;
 	};
 	unsigned int bytecount;
 	unsigned short gso_segs;
@@ -363,6 +365,9 @@ struct i40e_ring {
 					 */
 
 	struct bpf_prog __rcu *xdp_prog;
+	struct i40e_ring *xdp_sibling;  /* rx to xdp, and xdp to rx */
+	bool xdp_needs_tail_bump;
+	u16 curr_in_use;
 } ____cacheline_internodealigned_in_smp;
 
 enum i40e_latency_range {
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support
  2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
                   ` (2 preceding siblings ...)
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support Björn Töpel
@ 2016-12-17 13:40 ` Björn Töpel
  2016-12-23 17:36   ` Bowers, AndrewX
  2017-01-31 21:37 ` [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Alexander Duyck
  4 siblings, 1 reply; 17+ messages in thread
From: Björn Töpel @ 2016-12-17 13:40 UTC (permalink / raw)
  To: intel-wired-lan

From: Björn Töpel <bjorn.topel@intel.com>

This patch will tell the user that bpf_xdp_adjust_head() is currently
not supported.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index efb95fb851f4..94eed585a01b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -9545,6 +9545,9 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi,
 	bool need_reset;
 	struct bpf_prog *old_prog;
 
+	if (prog && prog->xdp_adjust_head)
+		return -EOPNOTSUPP;
+
 	/* The Rx frame has to fit in 2k */
 	if (frame_size > I40E_RXBUFFER_2048)
 		return -EINVAL;
-- 
2.9.3



* [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation Björn Töpel
@ 2016-12-23 17:35   ` Bowers, AndrewX
  0 siblings, 0 replies; 17+ messages in thread
From: Bowers, AndrewX @ 2016-12-23 17:35 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Björn Töpel
> Sent: Saturday, December 17, 2016 5:40 AM
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; intel-wired-
> lan at lists.osuosl.org
> Cc: daniel at iogearbox.net; Topel, Bjorn <bjorn.topel@intel.com>; Karlsson,
> Magnus <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff
> allocation
> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> This patch prepares i40e_fetch_rx_buffer() for upcoming XDP support,
> where there's a need to access the device buffers prior skbuff allocation.
> 
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Does not break base driver




* [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP Björn Töpel
@ 2016-12-23 17:35   ` Bowers, AndrewX
  0 siblings, 0 replies; 17+ messages in thread
From: Bowers, AndrewX @ 2016-12-23 17:35 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Björn Töpel
> Sent: Saturday, December 17, 2016 5:40 AM
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; intel-wired-
> lan at lists.osuosl.org
> Cc: daniel at iogearbox.net; Topel, Bjorn <bjorn.topel@intel.com>; Karlsson,
> Magnus <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP
> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> This commit adds basic XDP support for i40e derived NICs. All XDP actions will
> end up in XDP_DROP.
> 
> Only the default/main VSI has support for enabling XDP.
> 
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e.h         |  13 +++
>  drivers/net/ethernet/intel/i40e/i40e_ethtool.c |   4 +
>  drivers/net/ethernet/intel/i40e/i40e_main.c    |  83 +++++++++++++++++
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 124
> +++++++++++++++++++++++--
>  drivers/net/ethernet/intel/i40e/i40e_txrx.h    |   2 +
>  5 files changed, 220 insertions(+), 6 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Does not break base driver




* [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support
  2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support Björn Töpel
@ 2016-12-23 17:36   ` Bowers, AndrewX
  0 siblings, 0 replies; 17+ messages in thread
From: Bowers, AndrewX @ 2016-12-23 17:36 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Björn Töpel
> Sent: Saturday, December 17, 2016 5:40 AM
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; intel-wired-
> lan at lists.osuosl.org
> Cc: daniel at iogearbox.net; Topel, Bjorn <bjorn.topel@intel.com>; Karlsson,
> Magnus <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support
> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> This patch adds proper XDP_TX support.
> 
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e.h      |   5 +
>  drivers/net/ethernet/intel/i40e/i40e_main.c | 294
> +++++++++++++++++++++++-----
> drivers/net/ethernet/intel/i40e/i40e_txrx.c | 255
> +++++++++++++++++++++---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.h |   5 +
>  4 files changed, 478 insertions(+), 81 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Does not break base driver




* [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support
  2016-12-17 13:40 ` [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support Björn Töpel
@ 2016-12-23 17:36   ` Bowers, AndrewX
  0 siblings, 0 replies; 17+ messages in thread
From: Bowers, AndrewX @ 2016-12-23 17:36 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Björn Töpel
> Sent: Saturday, December 17, 2016 5:40 AM
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; intel-wired-
> lan at lists.osuosl.org
> Cc: daniel at iogearbox.net; Topel, Bjorn <bjorn.topel@intel.com>; Karlsson,
> Magnus <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head
> support
> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> This patch will tell the user that bpf_xdp_adjust_head() is currently not
> supported.
> 
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_main.c | 3 +++
>  1 file changed, 3 insertions(+)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Does not break base driver




* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
                   ` (3 preceding siblings ...)
  2016-12-17 13:40 ` [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support Björn Töpel
@ 2017-01-31 21:37 ` Alexander Duyck
  2017-01-31 22:30   ` Björn Töpel
  4 siblings, 1 reply; 17+ messages in thread
From: Alexander Duyck @ 2017-01-31 21:37 UTC (permalink / raw)
  To: intel-wired-lan

We probably need to respin this and do a v5 of the XDP code so that we
can reserve some headroom at the start of the frames.  I have patches
I am already working on to enable build_skb, like I have done for igb and
ixgbe.  I can probably take on respinning this and getting it applied
to our out-of-tree driver.

- Alex

On Sat, Dec 17, 2016 at 5:39 AM, Björn Töpel <bjorn.topel@gmail.com> wrote:
> From: Björn Töpel <bjorn.topel@intel.com>
>
> This series adds XDP support for i40e-based NICs.
>
> The first patch prepares i40e_fetch_rx_buffer() for upcoming changes,
> followed by XDP_RX support, the third adds XDP_TX support and the last
> patch validates bpf_xdp_adjust_head() support.
>
> Thanks to Alex, Daniel, John and Scott for all the feedback!
>
> v4:
>   * Removed unused i40e_page_is_reserved function
>   * Prior running the XDP program, set the struct xdp_buff
>     data_hard_start member
>
> v3:
>   * Rebased patch set on Jeff's dev-queue branch
>   * MSI-X is no longer a prerequisite for XDP
>   * RCU locking for the XDP program and XDP_RX support is introduced
>     in the same patch
>   * Rx bytes is now bumped for XDP
>   * Removed pointer-to-pointer clunkiness
>   * Added comments to XDP preconditions in ndo_xdp
>   * When a non-EOF is received, log once, and drop the frame
>
> v2:
>   * Fixed kbuild error for PAGE_SIZE >= 8192.
>   * Renamed i40e_try_flip_rx_page to i40e_can_reuse_rx_page, which is
>     more in line to the other Intel Ethernet drivers (igb/fm10k).
>   * Validate xdp_adjust_head support in ndo_xdp/XDP_SETUP_PROG.
>
>
> Björn
>
>
> Björn Töpel (4):
>   i40e: Sync DMA region prior skbuff allocation
>   i40e: Initial support for XDP
>   i40e: Add XDP_TX support
>   i40e: Validate xdp_adjust_head support
>
>  drivers/net/ethernet/intel/i40e/i40e.h         |  18 ++
>  drivers/net/ethernet/intel/i40e/i40e_ethtool.c |   4 +
>  drivers/net/ethernet/intel/i40e/i40e_main.c    | 380 ++++++++++++++++++++----
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 383 ++++++++++++++++++++++---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.h    |   7 +
>  5 files changed, 703 insertions(+), 89 deletions(-)
>
> --
> 2.9.3
>
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan at lists.osuosl.org
> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan


* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-01-31 21:37 ` [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Alexander Duyck
@ 2017-01-31 22:30   ` =?unknown-8bit?q?Bj=C3=B6rn_T=C3=B6pel?=
  2017-02-01 21:32     ` Alexander Duyck
  0 siblings, 1 reply; 17+ messages in thread
From: =?unknown-8bit?q?Bj=C3=B6rn_T=C3=B6pel?= @ 2017-01-31 22:30 UTC (permalink / raw)
  To: intel-wired-lan

2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
> We probably need to respin this and do a v5 of the XDP code so that we
> can reserve some headroom at the start of the frames.  I have patches
> I am already working on to enable build_skb, like I have done for igb and
> ixgbe.  I can probably take on respinning this and getting it applied
> to our out-of-tree driver.

Respinning or an additional patch for the xdp_adjust_head()
support? Maybe it's better to respin it, so we get proper
xdp_adjust_head() support, prior to upstreaming.

If headroom support for XDP aligns nicely into your build_skb
patches for i40e, feel free to do that! If not, let me know, and
I'll start working on it.


Thanks for bringing this up!
Björn



>
> - Alex
>
On Sat, Dec 17, 2016 at 5:39 AM, Björn Töpel <bjorn.topel@gmail.com> wrote:
>> From: Björn Töpel <bjorn.topel@intel.com>
>>
>> This series adds XDP support for i40e-based NICs.
>>
>> The first patch prepares i40e_fetch_rx_buffer() for upcoming changes,
>> followed by XDP_RX support, the third adds XDP_TX support and the last
>> patch validates bpf_xdp_adjust_head() support.
>>
>> Thanks to Alex, Daniel, John and Scott for all the feedback!
>>
>> v4:
>>   * Removed unused i40e_page_is_reserved function
>>   * Prior running the XDP program, set the struct xdp_buff
>>     data_hard_start member
>>
>> v3:
>>   * Rebased patch set on Jeff's dev-queue branch
>>   * MSI-X is no longer a prerequisite for XDP
>>   * RCU locking for the XDP program and XDP_RX support is introduced
>>     in the same patch
>>   * Rx bytes is now bumped for XDP
>>   * Removed pointer-to-pointer clunkiness
>>   * Added comments to XDP preconditions in ndo_xdp
>>   * When a non-EOF is received, log once, and drop the frame
>>
>> v2:
>>   * Fixed kbuild error for PAGE_SIZE >= 8192.
>>   * Renamed i40e_try_flip_rx_page to i40e_can_reuse_rx_page, which is
>>     more in line to the other Intel Ethernet drivers (igb/fm10k).
>>   * Validate xdp_adjust_head support in ndo_xdp/XDP_SETUP_PROG.
>>
>>
>> Björn
>>
>>
>> Björn Töpel (4):
>>   i40e: Sync DMA region prior skbuff allocation
>>   i40e: Initial support for XDP
>>   i40e: Add XDP_TX support
>>   i40e: Validate xdp_adjust_head support
>>
>>  drivers/net/ethernet/intel/i40e/i40e.h         |  18 ++
>>  drivers/net/ethernet/intel/i40e/i40e_ethtool.c |   4 +
>>  drivers/net/ethernet/intel/i40e/i40e_main.c    | 380 ++++++++++++++++++++----
>>  drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 383 ++++++++++++++++++++++---
>>  drivers/net/ethernet/intel/i40e/i40e_txrx.h    |   7 +
>>  5 files changed, 703 insertions(+), 89 deletions(-)
>>
>> --
>> 2.9.3
>>
>> _______________________________________________
>> Intel-wired-lan mailing list
>> Intel-wired-lan at lists.osuosl.org
>> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan


* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-01-31 22:30   ` Björn Töpel
@ 2017-02-01 21:32     ` Alexander Duyck
  2017-02-02 11:34       ` Björn Töpel
  2017-02-02 22:20       ` John Fastabend
  0 siblings, 2 replies; 17+ messages in thread
From: Alexander Duyck @ 2017-02-01 21:32 UTC (permalink / raw)
  To: intel-wired-lan

On Tue, Jan 31, 2017 at 2:30 PM, Björn Töpel <bjorn.topel@gmail.com> wrote:
> 2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
>> We probably need to respin this and do a v5 of the XDP code so that we
>> can reserve some headroom at the start of the frames.  I have patches
>> I am already working on to enable build_skb, like I have done for igb and
>> ixgbe.  I can probably take on respinning this and getting it applied
>> to our out-of-tree driver.
>
> Respinning or an additional patch for the xdp_adjust_head()
> support? Maybe it's better to respin it, so we get proper
> xdp_adjust_head() support, prior to upstreaming.

It makes the upstreaming process easier for us since we normally work
out of tree and then upstream patches after testing in the case of
i40e.

> If headroom support for XDP aligns nicely into your build_skb
> patches for i40e, feel free to do that! If not, let me know, and
> I'll start working on it.

Yeah, I'll be working on providing headroom in order to support
build_skb anyway so I can just combine the efforts.  Then it makes the
upstreaming effort easier as well as we will have the headroom added
before we actually start work on XDP.

Thanks.

- Alex


* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-02-01 21:32     ` Alexander Duyck
@ 2017-02-02 11:34       ` =?unknown-8bit?q?Bj=C3=B6rn_T=C3=B6pel?=
  2017-02-02 22:20       ` John Fastabend
  1 sibling, 0 replies; 17+ messages in thread
From: =?unknown-8bit?q?Bj=C3=B6rn_T=C3=B6pel?= @ 2017-02-02 11:34 UTC (permalink / raw)
  To: intel-wired-lan

2017-02-01 22:32 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
> On Tue, Jan 31, 2017 at 2:30 PM, Björn Töpel <bjorn.topel@gmail.com> wrote:
>> 2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
>>> We probably need to respin this and do a v5 of the XDP code so that we
>>> can reserve some headroom at the start of the frames.  I have patches
>>> I already working on to enable build_skb like I have done for igb and
>>> ixgbe.  I can probably take on respinning this and getting it applied
>>> to our out-of-tree driver.
>>
>> Respinning or an additional patch for the xdp_adjust_head()
>> support? Maybe it's better to respin it, so we get proper
>> xdp_adjust_head() support, prior upstreaming.
>
> It makes the upstreaming process easier for us since we normally work
> out of tree and then upstream patches after testing in the case of
> i40e.

Makes sense. So, I guess Jeff needs to pull out the XDP patches from the
dev-queue branch then.

Jeff, please remove the XDP patch set from the dev-queue.


Cheers,
Björn

>
>> If headroom support for XDP aligns nicely into your build_skb
>> patches for i40e, feel free to do that! If not, let me know, and
>> I'll start working on it.
>
> Yeah, I'll be working on providing headroom in order to support
> build_skb anyway so I can just combine the efforts.  Then it makes the
> upstreaming effort easier as well as we will have the headroom added
> before we actually start work on XDP.
>
> Thanks.
>
> - Alex


* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-02-01 21:32     ` Alexander Duyck
  2017-02-02 11:34       ` Björn Töpel
@ 2017-02-02 22:20       ` John Fastabend
  2017-02-02 22:57         ` Alexander Duyck
  1 sibling, 1 reply; 17+ messages in thread
From: John Fastabend @ 2017-02-02 22:20 UTC (permalink / raw)
  To: intel-wired-lan

On 17-02-01 01:32 PM, Alexander Duyck wrote:
> On Tue, Jan 31, 2017 at 2:30 PM, Björn Töpel <bjorn.topel@gmail.com> wrote:
>> 2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
>>> We probably need to respin this and do a v5 of the XDP code so that we
>>> can reserve some headroom at the start of the frames.  I have patches
>>> I am already working on to enable build_skb like I have done for igb and
>>> ixgbe.  I can probably take on respinning this and getting it applied
>>> to our out-of-tree driver.
>>
>> Respinning or an additional patch for the xdp_adjust_head()
>> support? Maybe it's better to respin it, so we get proper
>> xdp_adjust_head() support, prior to upstreaming.
> 
> It makes the upstreaming process easier for us since we normally work
> out of tree and then upstream patches after testing in the case of
> i40e.
> 

Any hints on when we can expect a v5 to show up? I have a few folks
outside Intel pulling these who are waiting to get these patches.

>> If headroom support for XDP aligns nicely into your build_skb
>> patches for i40e, feel free to do that! If not, let me know, and
>> I'll start working on it.
> 
> Yeah, I'll be working on providing headroom in order to support
> build_skb anyway so I can just combine the efforts.  Then it makes the
> upstreaming effort easier as well as we will have the headroom added
> before we actually start work on XDP.
> 
> Thanks.
> 
> - Alex
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-02-02 22:20       ` John Fastabend
@ 2017-02-02 22:57         ` Alexander Duyck
  2017-02-06 22:02           ` John Fastabend
  0 siblings, 1 reply; 17+ messages in thread
From: Alexander Duyck @ 2017-02-02 22:57 UTC (permalink / raw)
  To: intel-wired-lan

It would likely be several weeks.  I am still trying to finish up the
out-of-tree build_skb changes for i40e.  Once I have that then I can
start working on the XDP bits.

My guess would be that these would probably end up in the 4.12 kernel
when everything is said and done.

- Alex

On Thu, Feb 2, 2017 at 2:20 PM, John Fastabend <john.fastabend@gmail.com> wrote:
> On 17-02-01 01:32 PM, Alexander Duyck wrote:
>> On Tue, Jan 31, 2017 at 2:30 PM, Björn Töpel <bjorn.topel@gmail.com> wrote:
>>> 2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
>>>> We probably need to respin this and do a v5 of the XDP code so that we
>>>> can reserve some headroom at the start of the frames.  I have patches
>>>> I am already working on to enable build_skb like I have done for igb and
>>>> ixgbe.  I can probably take on respinning this and getting it applied
>>>> to our out-of-tree driver.
>>>
>>> Respinning or an additional patch for the xdp_adjust_head()
>>> support? Maybe it's better to respin it, so we get proper
>>> xdp_adjust_head() support, prior to upstreaming.
>>
>> It makes the upstreaming process easier for us since we normally work
>> out of tree and then upstream patches after testing in the case of
>> i40e.
>>
>
> Any hints on when we can expect a v5 to show up? I have a few folks
> outside Intel pulling these who are waiting to get these patches.
>
>>> If headroom support for XDP aligns nicely into your build_skb
>>> patches for i40e, feel free to do that! If not, let me know, and
>>> I'll start working on it.
>>
>> Yeah, I'll be working on providing headroom in order to support
>> build_skb anyway so I can just combine the efforts.  Then it makes the
>> upstreaming effort easier as well as we will have the headroom added
>> before we actually start work on XDP.
>>
>> Thanks.
>>
>> - Alex
>>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-02-02 22:57         ` Alexander Duyck
@ 2017-02-06 22:02           ` John Fastabend
  2017-02-06 23:09             ` Alexander Duyck
  0 siblings, 1 reply; 17+ messages in thread
From: John Fastabend @ 2017-02-06 22:02 UTC (permalink / raw)
  To: intel-wired-lan

On 17-02-02 02:57 PM, Alexander Duyck wrote:
> It would likely be several weeks.  I am still trying to finish up the
> out-of-tree build_skb changes for i40e.  Once I have that then I can
> start working on the XDP bits.
> 
> My guess would be that these would probably end up in the 4.12 kernel
> when everything is said and done.

OK. IIRC, though, ixgbe already has these changes, so we can add XDP support
without any delays there?

I have a handful of folks wanting to test/use this with adjust_head support,
so for the time being I'm going to push a fork to my GitHub branch for folks
to pull. But the sooner the better for mainline support.

Thanks,
John

> 
> - Alex
> 
> On Thu, Feb 2, 2017 at 2:20 PM, John Fastabend <john.fastabend@gmail.com> wrote:
>> On 17-02-01 01:32 PM, Alexander Duyck wrote:
>>> On Tue, Jan 31, 2017 at 2:30 PM, Björn Töpel <bjorn.topel@gmail.com> wrote:
>>>> 2017-01-31 22:37 GMT+01:00 Alexander Duyck <alexander.duyck@gmail.com>:
>>>>> We probably need to respin this and do a v5 of the XDP code so that we
>>>>> can reserve some headroom at the start of the frames.  I have patches
>>>>> I am already working on to enable build_skb like I have done for igb and
>>>>> ixgbe.  I can probably take on respinning this and getting it applied
>>>>> to our out-of-tree driver.
>>>>
>>>> Respinning or an additional patch for the xdp_adjust_head()
>>>> support? Maybe it's better to respin it, so we get proper
>>>> xdp_adjust_head() support, prior to upstreaming.
>>>
>>> It makes the upstreaming process easier for us since we normally work
>>> out of tree and then upstream patches after testing in the case of
>>> i40e.
>>>
>>
>> Any hints on when we can expect a v5 to show up? I have a few folks
>> outside Intel pulling these who are waiting to get these patches.
>>
>>>> If headroom support for XDP aligns nicely into your build_skb
>>>> patches for i40e, feel free to do that! If not, let me know, and
>>>> I'll start working on it.
>>>
>>> Yeah, I'll be working on providing headroom in order to support
>>> build_skb anyway so I can just combine the efforts.  Then it makes the
>>> upstreaming effort easier as well as we will have the headroom added
>>> before we actually start work on XDP.
>>>
>>> Thanks.
>>>
>>> - Alex
>>>
>>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP
  2017-02-06 22:02           ` John Fastabend
@ 2017-02-06 23:09             ` Alexander Duyck
  0 siblings, 0 replies; 17+ messages in thread
From: Alexander Duyck @ 2017-02-06 23:09 UTC (permalink / raw)
  To: intel-wired-lan

On Mon, Feb 6, 2017 at 2:02 PM, John Fastabend <john.fastabend@gmail.com> wrote:
> On 17-02-02 02:57 PM, Alexander Duyck wrote:
>> It would likely be several weeks.  I am still trying to finish up the
>> out-of-tree build_skb changes for i40e.  Once I have that then I can
>> start working on the XDP bits.
>>
>> My guess would be that these would probably end up in the 4.12 kernel
>> when everything is said and done.
>
> OK. IIRC, though, ixgbe already has these changes, so we can add XDP support
> without any delays there?
>
> I have a handful of folks wanting to test/use this with adjust_head support,
> so for the time being I'm going to push a fork to my GitHub branch for folks
> to pull. But the sooner the better for mainline support.
>
> Thanks,
> John
>

Agreed.  I will try to get this completed ASAP, and you can work with
the fork in the meantime.

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2017-02-06 23:09 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-17 13:39 [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Björn Töpel
2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior skbuff allocation Björn Töpel
2016-12-23 17:35   ` Bowers, AndrewX
2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 2/4] i40e: Initial support for XDP Björn Töpel
2016-12-23 17:35   ` Bowers, AndrewX
2016-12-17 13:39 ` [Intel-wired-lan] [PATCH v4 3/4] i40e: Add XDP_TX support Björn Töpel
2016-12-23 17:36   ` Bowers, AndrewX
2016-12-17 13:40 ` [Intel-wired-lan] [PATCH v4 4/4] i40e: Validate xdp_adjust_head support Björn Töpel
2016-12-23 17:36   ` Bowers, AndrewX
2017-01-31 21:37 ` [Intel-wired-lan] [PATCH v4 0/4] i40e: Support for XDP Alexander Duyck
2017-01-31 22:30   ` Björn Töpel
2017-02-01 21:32     ` Alexander Duyck
2017-02-02 11:34       ` Björn Töpel
2017-02-02 22:20       ` John Fastabend
2017-02-02 22:57         ` Alexander Duyck
2017-02-06 22:02           ` John Fastabend
2017-02-06 23:09             ` Alexander Duyck
