netdev.vger.kernel.org archive mirror
* [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support
@ 2019-06-29  5:23 Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API Ilias Apalodimas
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-29  5:23 UTC (permalink / raw)
  To: netdev, jaswinder.singh
  Cc: ard.biesheuvel, bjorn.topel, magnus.karlsson, brouer, daniel,
	ast, makita.toshiaki, jakub.kicinski, john.fastabend, davem,
	maciejromanfijalkowski, Ilias Apalodimas

This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
Since the page_pool API fixes are merged into net-next we can now safely
use its DMA mapping capabilities.

The first patch changes the buffer allocation from napi/netdev_alloc_frag()
to the page_pool API. Although this will lead to slightly reduced performance
(on raw packet drops only), we can use the API for XDP buffer recycling.
Another side effect is a slight increase in memory usage, due to using a
single page per packet.

The second patch adds XDP support to the driver.
There's a bunch of interesting options that come up due to the single
Tx queue.
Locking is needed (to avoid messing up the Tx queue, since ndo_xdp_xmit
and the normal stack can co-exist). We also need to track the
'buffer type' for Tx and properly free or recycle the packet depending
on its nature.
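
The buffer-type tracking described above can be sketched as follows. This is a minimal userspace model, not the driver code: the names and counters are stand-ins for the kernel's sk_buff/xdp_frame handling, and the real cleanup path calls dev_kfree_skb() or xdp_return_frame() instead of bumping counters.

```c
#include <assert.h>

/* Hypothetical stand-ins: the real driver unions an sk_buff pointer with
 * an xdp_frame pointer and tags each Tx descriptor with a buffer type so
 * the cleanup path knows how to release it.
 */
enum buf_type { TYPE_SKB, TYPE_XDP_TX, TYPE_XDP_NDO };

struct tx_desc {
	enum buf_type buf_type;
	int freed_as_skb;	/* counts stand-in calls to dev_kfree_skb() */
	int freed_as_xdp;	/* counts stand-in calls to xdp_return_frame() */
};

/* Mirrors the split in the Tx cleanup path: skbs go back to the network
 * stack, XDP frames are returned to their memory owner.
 */
static void release_tx_buffer(struct tx_desc *d)
{
	if (d->buf_type == TYPE_SKB)
		d->freed_as_skb++;	/* dev_kfree_skb() in the real driver */
	else
		d->freed_as_xdp++;	/* xdp_return_frame() in the real driver */
}
```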


Changes since RFC:
- Bug fixes from Jesper and Maciej
- Added page pool API to retrieve the DMA direction

Changes since v1:
- Use page_pool_free correctly if xdp_rxq_info_reg() failed

Ilias Apalodimas (3):
  net: netsec: Use page_pool API
  net: page_pool: add helper function for retrieving dma direction
  net: netsec: add XDP support

 drivers/net/ethernet/socionext/Kconfig  |   1 +
 drivers/net/ethernet/socionext/netsec.c | 473 ++++++++++++++++++++----
 include/net/page_pool.h                 |   9 +
 3 files changed, 416 insertions(+), 67 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API
  2019-06-29  5:23 [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support Ilias Apalodimas
@ 2019-06-29  5:23 ` Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 2/3, v2] net: page_pool: add helper function for retrieving dma direction Ilias Apalodimas
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-29  5:23 UTC (permalink / raw)
  To: netdev, jaswinder.singh
  Cc: ard.biesheuvel, bjorn.topel, magnus.karlsson, brouer, daniel,
	ast, makita.toshiaki, jakub.kicinski, john.fastabend, davem,
	maciejromanfijalkowski, Ilias Apalodimas

Use page_pool and its DMA mapping capabilities for Rx buffers instead
of netdev/napi_alloc_frag().

Although this will result in a slight performance penalty on small-sized
packets (~10%), the use of the API will allow us to easily add XDP support.
The penalty won't be visible in network testing (i.e. iperf/netperf etc.);
it only shows up during raw packet drops.
Furthermore we intend to add recycling capabilities to the API
in the future. Once recycling is added the performance penalty will
go away.
The only 'real' penalty is the slightly increased memory usage, since we
now allocate a page per packet instead of the bytes we need plus the
skb metadata (the difference is roughly 2KB per packet).
With a minimum of 4GB of RAM on the only SoC that has this NIC the
extra memory usage is negligible (a bit more on 64K pages).
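
The per-packet accounting behind the "page per packet" overhead can be sketched like this. The constants are illustrative (NET_SKB_PAD and sizeof(struct skb_shared_info) depend on the kernel config and architecture); the point is only that usable payload is PAGE_SIZE minus headroom and shared_info, as in the NETSEC_RX_BUF_NON_DATA computation in the patch.

```c
#include <assert.h>

/* Illustrative sizes, not the real kernel values. */
#define PAGE_SZ		4096
#define NET_SKB_PAD	32	/* arch-dependent in the kernel */
#define NET_IP_ALIGN	2
#define SHARED_INFO_SZ	320	/* assumed sizeof(struct skb_shared_info) */

/* Stand-in for SKB_DATA_ALIGN(): round up to the cacheline size. */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* What the driver programs as the descriptor length: the whole page
 * minus headroom and the tail reserved for skb_shared_info.
 */
static int usable_rx_len(void)
{
	int overhead = NET_SKB_PAD + NET_IP_ALIGN +
		       ALIGN_UP(SHARED_INFO_SZ, 64);
	return PAGE_SZ - overhead;
}
```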

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/ethernet/socionext/Kconfig  |   1 +
 drivers/net/ethernet/socionext/netsec.c | 126 +++++++++++++++---------
 2 files changed, 80 insertions(+), 47 deletions(-)

diff --git a/drivers/net/ethernet/socionext/Kconfig b/drivers/net/ethernet/socionext/Kconfig
index 25f18be27423..95e99baf3f45 100644
--- a/drivers/net/ethernet/socionext/Kconfig
+++ b/drivers/net/ethernet/socionext/Kconfig
@@ -26,6 +26,7 @@ config SNI_NETSEC
 	tristate "Socionext NETSEC ethernet support"
 	depends on (ARCH_SYNQUACER || COMPILE_TEST) && OF
 	select PHYLIB
+	select PAGE_POOL
 	select MII
 	---help---
 	  Enable to add support for the SocioNext NetSec Gigabit Ethernet
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 48fd7448b513..7791bff2f2af 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -11,6 +11,7 @@
 #include <linux/io.h>
 
 #include <net/tcp.h>
+#include <net/page_pool.h>
 #include <net/ip6_checksum.h>
 
 #define NETSEC_REG_SOFT_RST			0x104
@@ -235,7 +236,8 @@
 #define DESC_NUM	256
 
 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_SZ 1536
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
 #define DESC_SZ	sizeof(struct netsec_de)
 
@@ -258,6 +260,8 @@ struct netsec_desc_ring {
 	struct netsec_desc *desc;
 	void *vaddr;
 	u16 head, tail;
+	struct page_pool *page_pool;
+	struct xdp_rxq_info xdp_rxq;
 };
 
 struct netsec_priv {
@@ -673,33 +677,27 @@ static void netsec_process_tx(struct netsec_priv *priv)
 }
 
 static void *netsec_alloc_rx_data(struct netsec_priv *priv,
-				  dma_addr_t *dma_handle, u16 *desc_len,
-				  bool napi)
+				  dma_addr_t *dma_handle, u16 *desc_len)
+
 {
-	size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	size_t payload_len = NETSEC_RX_BUF_SZ;
-	dma_addr_t mapping;
-	void *buf;
 
-	total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD);
+	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct page *page;
 
-	buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len);
-	if (!buf)
+	page = page_pool_dev_alloc_pages(dring->page_pool);
+	if (!page)
 		return NULL;
 
-	mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len,
-				 DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(priv->dev, mapping)))
-		goto err_out;
-
-	*dma_handle = mapping;
-	*desc_len = payload_len;
-
-	return buf;
+	/* page_pool API will map the whole page, skip
+	 * NET_SKB_PAD + NET_IP_ALIGN for the payload
+	 */
+	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD;
+	/* make sure the incoming payload fits in the page with the needed
+	 * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info
+	 */
+	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
 
-err_out:
-	skb_free_frag(buf);
-	return NULL;
+	return page_address(page);
 }
 
 static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
@@ -728,10 +726,10 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		u16 idx = dring->tail;
 		struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 		struct netsec_desc *desc = &dring->desc[idx];
+		struct page *page = virt_to_page(desc->addr);
 		u16 pkt_len, desc_len;
 		dma_addr_t dma_handle;
 		void *buf_addr;
-		u32 truesize;
 
 		if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {
 			/* reading the register clears the irq */
@@ -766,8 +764,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		/* allocate a fresh buffer and map it to the hardware.
 		 * This will eventually replace the old buffer in the hardware
 		 */
-		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len,
-						true);
+		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len);
+
 		if (unlikely(!buf_addr))
 			break;
 
@@ -775,22 +773,19 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 					DMA_FROM_DEVICE);
 		prefetch(desc->addr);
 
-		truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		skb = build_skb(desc->addr, truesize);
+		skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
 		if (unlikely(!skb)) {
-			/* free the newly allocated buffer, we are not going to
-			 * use it
+			/* If skb fails recycle_direct will either unmap and
+			 * free the page or refill the cache depending on the
+			 * cache state. Since we paid the allocation cost if
+			 * building an skb fails try to put the page into cache
 			 */
-			dma_unmap_single(priv->dev, dma_handle, desc_len,
-					 DMA_FROM_DEVICE);
-			skb_free_frag(buf_addr);
+			page_pool_recycle_direct(dring->page_pool, page);
 			netif_err(priv, drv, priv->ndev,
 				  "rx failed to build skb\n");
 			break;
 		}
-		dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len,
-				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+		page_pool_release_page(dring->page_pool, page);
 
 		/* Update the descriptor with the new buffer we allocated */
 		desc->len = desc_len;
@@ -980,19 +975,31 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
 
 	if (!dring->vaddr || !dring->desc)
 		return;
-
 	for (idx = 0; idx < DESC_NUM; idx++) {
 		desc = &dring->desc[idx];
 		if (!desc->addr)
 			continue;
 
-		dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
-				 id == NETSEC_RING_RX ? DMA_FROM_DEVICE :
-							      DMA_TO_DEVICE);
-		if (id == NETSEC_RING_RX)
-			skb_free_frag(desc->addr);
-		else if (id == NETSEC_RING_TX)
+		if (id == NETSEC_RING_RX) {
+			struct page *page = virt_to_page(desc->addr);
+
+			page_pool_put_page(dring->page_pool, page, false);
+		} else if (id == NETSEC_RING_TX) {
+			dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+					 DMA_TO_DEVICE);
 			dev_kfree_skb(desc->skb);
+		}
+	}
+
+	/* Rx is currently using page_pool
+	 * since the pool is created during netsec_setup_rx_dring(), we need to
+	 * free the pool manually if the registration failed
+	 */
+	if (id == NETSEC_RING_RX) {
+		if (xdp_rxq_info_is_reg(&dring->xdp_rxq))
+			xdp_rxq_info_unreg(&dring->xdp_rxq);
+		else
+			page_pool_free(dring->page_pool);
 	}
 
 	memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM);
@@ -1059,7 +1066,23 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
 static int netsec_setup_rx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
-	int i;
+	struct page_pool_params pp_params = { 0 };
+	int i, err;
+
+	pp_params.order = 0;
+	/* internal DMA mapping in page_pool */
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = DESC_NUM;
+	pp_params.nid = cpu_to_node(0);
+	pp_params.dev = priv->dev;
+	pp_params.dma_dir = DMA_FROM_DEVICE;
+
+	dring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(dring->page_pool)) {
+		err = PTR_ERR(dring->page_pool);
+		dring->page_pool = NULL;
+		goto err_out;
+	}
 
 	for (i = 0; i < DESC_NUM; i++) {
 		struct netsec_desc *desc = &dring->desc[i];
@@ -1067,10 +1090,10 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 		void *buf;
 		u16 len;
 
-		buf = netsec_alloc_rx_data(priv, &dma_handle, &len,
-					   false);
+		buf = netsec_alloc_rx_data(priv, &dma_handle, &len);
+
 		if (!buf) {
-			netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+			err = -ENOMEM;
 			goto err_out;
 		}
 		desc->dma_addr = dma_handle;
@@ -1079,11 +1102,20 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 	}
 
 	netsec_rx_fill(priv, 0, DESC_NUM);
+	err = xdp_rxq_info_reg(&dring->xdp_rxq, priv->ndev, 0);
+	if (err)
+		goto err_out;
+
+	err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 dring->page_pool);
+	if (err)
+		goto err_out;
 
 	return 0;
 
 err_out:
-	return -ENOMEM;
+	netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+	return err;
 }
 
 static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [net-next, PATCH 2/3, v2] net: page_pool: add helper function for retrieving dma direction
  2019-06-29  5:23 [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API Ilias Apalodimas
@ 2019-06-29  5:23 ` Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
  2019-07-02  2:27 ` [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support David Miller
  3 siblings, 0 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-29  5:23 UTC (permalink / raw)
  To: netdev, jaswinder.singh
  Cc: ard.biesheuvel, bjorn.topel, magnus.karlsson, brouer, daniel,
	ast, makita.toshiaki, jakub.kicinski, john.fastabend, davem,
	maciejromanfijalkowski, Ilias Apalodimas

Since the dma direction is stored in the page_pool params, offer an API
helper for drivers that choose not to keep track of it locally.
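
A minimal userspace model of the helper, with simplified stand-in types (the real structs live in include/net/page_pool.h): the pool records its creation-time parameters, and the accessor just reads the direction back, sparing drivers a private copy.

```c
#include <assert.h>

/* Simplified stand-ins for the kernel types. */
enum dma_data_direction {
	DMA_BIDIRECTIONAL = 0,
	DMA_TO_DEVICE = 1,
	DMA_FROM_DEVICE = 2,
};

struct page_pool_params {
	enum dma_data_direction dma_dir;	/* set once at pool creation */
};

struct page_pool {
	struct page_pool_params p;
};

/* The helper added by this patch: return the stored dma direction. */
static inline enum dma_data_direction
page_pool_get_dma_dir(const struct page_pool *pool)
{
	return pool->p.dma_dir;
}
```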

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/net/page_pool.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index f07c518ef8a5..ee9c871d2043 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -112,6 +112,15 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 	return page_pool_alloc_pages(pool, gfp);
 }
 
+/* get the stored dma direction. A driver might decide to treat this locally and
+ * avoid the extra cache line from page_pool to determine the direction
+ */
+static
+inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+{
+	return pool->p.dma_dir;
+}
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 void __page_pool_free(struct page_pool *pool);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-29  5:23 [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API Ilias Apalodimas
  2019-06-29  5:23 ` [net-next, PATCH 2/3, v2] net: page_pool: add helper function for retrieving dma direction Ilias Apalodimas
@ 2019-06-29  5:23 ` Ilias Apalodimas
  2019-06-29 10:36   ` Jesper Dangaard Brouer
                     ` (2 more replies)
  2019-07-02  2:27 ` [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support David Miller
  3 siblings, 3 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-29  5:23 UTC (permalink / raw)
  To: netdev, jaswinder.singh
  Cc: ard.biesheuvel, bjorn.topel, magnus.karlsson, brouer, daniel,
	ast, makita.toshiaki, jakub.kicinski, john.fastabend, davem,
	maciejromanfijalkowski, Ilias Apalodimas

The interface only supports 1 Tx queue, so locking is introduced on
the Tx queue when XDP is enabled to make sure .ndo_start_xmit and
.ndo_xdp_xmit won't corrupt the Tx ring.

- Performance (SMMU off)

Benchmark   XDP_SKB     XDP_DRV
xdp1        291kpps     344kpps
rxdrop      282kpps     342kpps

- Performance (SMMU on)
Benchmark   XDP_SKB     XDP_DRV
xdp1        167kpps     324kpps
rxdrop      164kpps     323kpps
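
The fill-level check guarding the shared Tx ring can be sketched as below. It mirrors the head/tail arithmetic in netsec_xdp_queue_one() in this patch (indices wrap modulo DESC_NUM, and queueing is refused when fewer than two slots remain); the surrounding spinlock is omitted here.

```c
#include <assert.h>

#define DESC_NUM	256

/* Number of in-flight descriptors between tail (cleanup) and head
 * (producer), accounting for index wrap-around.
 */
static int ring_filled(int head, int tail)
{
	return head >= tail ? head - tail : head + DESC_NUM - tail;
}

/* The driver rejects an XDP frame (NETSEC_XDP_CONSUMED) unless more
 * than one free slot remains.
 */
static int can_queue(int head, int tail)
{
	return DESC_NUM - ring_filled(head, tail) > 1;
}
```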

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
 1 file changed, 334 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 7791bff2f2af..5544a722543f 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -9,6 +9,9 @@
 #include <linux/etherdevice.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
+#include <linux/netlink.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
 
 #include <net/tcp.h>
 #include <net/page_pool.h>
@@ -236,23 +239,41 @@
 #define DESC_NUM	256
 
 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \
+#define NETSEC_RXBUF_HEADROOM (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \
+			       NET_IP_ALIGN)
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_RXBUF_HEADROOM + \
 				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
 #define DESC_SZ	sizeof(struct netsec_de)
 
 #define NETSEC_F_NETSEC_VER_MAJOR_NUM(x)	((x) & 0xffff0000)
 
+#define NETSEC_XDP_PASS          0
+#define NETSEC_XDP_CONSUMED      BIT(0)
+#define NETSEC_XDP_TX            BIT(1)
+#define NETSEC_XDP_REDIR         BIT(2)
+#define NETSEC_XDP_RX_OK (NETSEC_XDP_PASS | NETSEC_XDP_TX | NETSEC_XDP_REDIR)
+
 enum ring_id {
 	NETSEC_RING_TX = 0,
 	NETSEC_RING_RX
 };
 
+enum buf_type {
+	TYPE_NETSEC_SKB = 0,
+	TYPE_NETSEC_XDP_TX,
+	TYPE_NETSEC_XDP_NDO,
+};
+
 struct netsec_desc {
-	struct sk_buff *skb;
+	union {
+		struct sk_buff *skb;
+		struct xdp_frame *xdpf;
+	};
 	dma_addr_t dma_addr;
 	void *addr;
 	u16 len;
+	u8 buf_type;
 };
 
 struct netsec_desc_ring {
@@ -260,13 +281,17 @@ struct netsec_desc_ring {
 	struct netsec_desc *desc;
 	void *vaddr;
 	u16 head, tail;
+	u16 xdp_xmit; /* netsec_xdp_xmit packets */
+	bool is_xdp;
 	struct page_pool *page_pool;
 	struct xdp_rxq_info xdp_rxq;
+	spinlock_t lock; /* XDP tx queue locking */
 };
 
 struct netsec_priv {
 	struct netsec_desc_ring desc_ring[NETSEC_RING_MAX];
 	struct ethtool_coalesce et_coalesce;
+	struct bpf_prog *xdp_prog;
 	spinlock_t reglock; /* protect reg access */
 	struct napi_struct napi;
 	phy_interface_t phy_interface;
@@ -303,6 +328,11 @@ struct netsec_rx_pkt_info {
 	bool err_flag;
 };
 
+static void netsec_set_tx_de(struct netsec_priv *priv,
+			     struct netsec_desc_ring *dring,
+			     const struct netsec_tx_pkt_ctrl *tx_ctrl,
+			     const struct netsec_desc *desc, void *buf);
+
 static void netsec_write(struct netsec_priv *priv, u32 reg_addr, u32 val)
 {
 	writel(val, priv->ioaddr + reg_addr);
@@ -609,6 +639,9 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 	int tail = dring->tail;
 	int cnt = 0;
 
+	if (dring->is_xdp)
+		spin_lock(&dring->lock);
+
 	pkts = 0;
 	bytes = 0;
 	entry = dring->vaddr + DESC_SZ * tail;
@@ -622,13 +655,23 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 		eop = (entry->attr >> NETSEC_TX_LAST) & 1;
 		dma_rmb();
 
-		dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
-				 DMA_TO_DEVICE);
-		if (eop) {
-			pkts++;
+		if (desc->buf_type == TYPE_NETSEC_SKB)
+			dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+					 DMA_TO_DEVICE);
+		else if (desc->buf_type == TYPE_NETSEC_XDP_NDO)
+			dma_unmap_single(priv->dev, desc->dma_addr,
+					 desc->len, DMA_TO_DEVICE);
+
+		if (!eop)
+			goto next;
+
+		if (desc->buf_type == TYPE_NETSEC_SKB) {
 			bytes += desc->skb->len;
 			dev_kfree_skb(desc->skb);
+		} else {
+			xdp_return_frame(desc->xdpf);
 		}
+next:
 		/* clean up so netsec_uninit_pkt_dring() won't free the skb
 		 * again
 		 */
@@ -645,6 +688,8 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 		entry = dring->vaddr + DESC_SZ * tail;
 		cnt++;
 	}
+	if (dring->is_xdp)
+		spin_unlock(&dring->lock);
 
 	if (!cnt)
 		return false;
@@ -688,12 +733,13 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
 	if (!page)
 		return NULL;
 
-	/* page_pool API will map the whole page, skip
-	 * NET_SKB_PAD + NET_IP_ALIGN for the payload
+	/* We allocate the same buffer length for XDP and non-XDP cases.
+	 * page_pool API will map the whole page, skip what's needed for
+	 * network payloads and/or XDP
 	 */
-	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD;
-	/* make sure the incoming payload fits in the page with the needed
-	 * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info
+	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
+	/* Make sure the incoming payload fits in the page for XDP and non-XDP
+	 * cases and reserve enough space for headroom + skb_shared_info
 	 */
 	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
 
@@ -714,21 +760,159 @@ static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
 	}
 }
 
+static void netsec_xdp_ring_tx_db(struct netsec_priv *priv, u16 pkts)
+{
+	if (likely(pkts))
+		netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, pkts);
+}
+
+static void netsec_finalize_xdp_rx(struct netsec_priv *priv, u32 xdp_res,
+				   u16 pkts)
+{
+	if (xdp_res & NETSEC_XDP_REDIR)
+		xdp_do_flush_map();
+
+	if (xdp_res & NETSEC_XDP_TX)
+		netsec_xdp_ring_tx_db(priv, pkts);
+}
+
+/* The current driver only supports 1 Txq, this should run under spin_lock() */
+static u32 netsec_xdp_queue_one(struct netsec_priv *priv,
+				struct xdp_frame *xdpf, bool is_ndo)
+
+{
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	struct page *page = virt_to_page(xdpf->data);
+	struct netsec_tx_pkt_ctrl tx_ctrl = {};
+	struct netsec_desc tx_desc;
+	dma_addr_t dma_handle;
+	u16 filled;
+
+	if (tx_ring->head >= tx_ring->tail)
+		filled = tx_ring->head - tx_ring->tail;
+	else
+		filled = tx_ring->head + DESC_NUM - tx_ring->tail;
+
+	if (DESC_NUM - filled <= 1)
+		return NETSEC_XDP_CONSUMED;
+
+	if (is_ndo) {
+		/* this is for ndo_xdp_xmit, the buffer needs mapping before
+		 * sending
+		 */
+		dma_handle = dma_map_single(priv->dev, xdpf->data, xdpf->len,
+					    DMA_TO_DEVICE);
+		if (dma_mapping_error(priv->dev, dma_handle))
+			return NETSEC_XDP_CONSUMED;
+		tx_desc.buf_type = TYPE_NETSEC_XDP_NDO;
+	} else {
+		/* This is the device Rx buffer from page_pool. No need to remap
+		 * just sync and send it
+		 */
+		struct netsec_desc_ring *rx_ring =
+			&priv->desc_ring[NETSEC_RING_RX];
+		enum dma_data_direction dma_dir =
+			page_pool_get_dma_dir(rx_ring->page_pool);
+
+		dma_handle = page_pool_get_dma_addr(page) +
+			NETSEC_RXBUF_HEADROOM;
+		dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len,
+					   dma_dir);
+		tx_desc.buf_type = TYPE_NETSEC_XDP_TX;
+	}
+
+	tx_desc.dma_addr = dma_handle;
+	tx_desc.addr = xdpf->data;
+	tx_desc.len = xdpf->len;
+
+	netsec_set_tx_de(priv, tx_ring, &tx_ctrl, &tx_desc, xdpf);
+
+	return NETSEC_XDP_TX;
+}
+
+static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
+{
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);
+	u32 ret;
+
+	if (unlikely(!xdpf))
+		return NETSEC_XDP_CONSUMED;
+
+	spin_lock(&tx_ring->lock);
+	ret = netsec_xdp_queue_one(priv, xdpf, false);
+	spin_unlock(&tx_ring->lock);
+
+	return ret;
+}
+
+static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
+			  struct xdp_buff *xdp)
+{
+	u32 ret = NETSEC_XDP_PASS;
+	int err;
+	u32 act;
+
+	act = bpf_prog_run_xdp(prog, xdp);
+
+	switch (act) {
+	case XDP_PASS:
+		ret = NETSEC_XDP_PASS;
+		break;
+	case XDP_TX:
+		ret = netsec_xdp_xmit_back(priv, xdp);
+		if (ret != NETSEC_XDP_TX)
+			xdp_return_buff(xdp);
+		break;
+	case XDP_REDIRECT:
+		err = xdp_do_redirect(priv->ndev, xdp, prog);
+		if (!err) {
+			ret = NETSEC_XDP_REDIR;
+		} else {
+			ret = NETSEC_XDP_CONSUMED;
+			xdp_return_buff(xdp);
+		}
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(priv->ndev, prog, act);
+		/* fall through -- handle aborts by dropping packet */
+	case XDP_DROP:
+		ret = NETSEC_XDP_CONSUMED;
+		xdp_return_buff(xdp);
+		break;
+	}
+
+	return ret;
+}
+
 static int netsec_process_rx(struct netsec_priv *priv, int budget)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
 	struct net_device *ndev = priv->ndev;
 	struct netsec_rx_pkt_info rx_info;
-	struct sk_buff *skb;
+	enum dma_data_direction dma_dir;
+	struct bpf_prog *xdp_prog;
+	struct sk_buff *skb = NULL;
+	u16 xdp_xmit = 0;
+	u32 xdp_act = 0;
 	int done = 0;
 
+	rcu_read_lock();
+	xdp_prog = READ_ONCE(priv->xdp_prog);
+	dma_dir = page_pool_get_dma_dir(dring->page_pool);
+
 	while (done < budget) {
 		u16 idx = dring->tail;
 		struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 		struct netsec_desc *desc = &dring->desc[idx];
 		struct page *page = virt_to_page(desc->addr);
+		u32 xdp_result = XDP_PASS;
 		u16 pkt_len, desc_len;
 		dma_addr_t dma_handle;
+		struct xdp_buff xdp;
 		void *buf_addr;
 
 		if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {
@@ -770,10 +954,26 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 			break;
 
 		dma_sync_single_for_cpu(priv->dev, desc->dma_addr, pkt_len,
-					DMA_FROM_DEVICE);
+					dma_dir);
 		prefetch(desc->addr);
 
+		xdp.data_hard_start = desc->addr;
+		xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
+		xdp_set_data_meta_invalid(&xdp);
+		xdp.data_end = xdp.data + pkt_len;
+		xdp.rxq = &dring->xdp_rxq;
+
+		if (xdp_prog) {
+			xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
+			if (xdp_result != NETSEC_XDP_PASS) {
+				xdp_act |= xdp_result;
+				if (xdp_result == NETSEC_XDP_TX)
+					xdp_xmit++;
+				goto next;
+			}
+		}
 		skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
+
 		if (unlikely(!skb)) {
 			/* If skb fails recycle_direct will either unmap and
 			 * free the page or refill the cache depending on the
@@ -787,27 +987,32 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		}
 		page_pool_release_page(dring->page_pool, page);
 
-		/* Update the descriptor with the new buffer we allocated */
-		desc->len = desc_len;
-		desc->dma_addr = dma_handle;
-		desc->addr = buf_addr;
-
-		skb_reserve(skb, NETSEC_SKB_PAD);
-		skb_put(skb, pkt_len);
+		skb_reserve(skb, xdp.data - xdp.data_hard_start);
+		skb_put(skb, xdp.data_end - xdp.data);
 		skb->protocol = eth_type_trans(skb, priv->ndev);
 
 		if (priv->rx_cksum_offload_flag &&
 		    rx_info.rx_cksum_result == NETSEC_RX_CKSUM_OK)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		if (napi_gro_receive(&priv->napi, skb) != GRO_DROP) {
+next:
+		if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) ||
+		    xdp_result & NETSEC_XDP_RX_OK) {
 			ndev->stats.rx_packets++;
-			ndev->stats.rx_bytes += pkt_len;
+			ndev->stats.rx_bytes += xdp.data_end - xdp.data;
 		}
 
+		/* Update the descriptor with fresh buffers */
+		desc->len = desc_len;
+		desc->dma_addr = dma_handle;
+		desc->addr = buf_addr;
+
 		netsec_rx_fill(priv, idx, 1);
 		dring->tail = (dring->tail + 1) % DESC_NUM;
 	}
+	netsec_finalize_xdp_rx(priv, xdp_act, xdp_xmit);
+
+	rcu_read_unlock();
 
 	return done;
 }
@@ -837,8 +1042,7 @@ static int netsec_napi_poll(struct napi_struct *napi, int budget)
 static void netsec_set_tx_de(struct netsec_priv *priv,
 			     struct netsec_desc_ring *dring,
 			     const struct netsec_tx_pkt_ctrl *tx_ctrl,
-			     const struct netsec_desc *desc,
-			     struct sk_buff *skb)
+			     const struct netsec_desc *desc, void *buf)
 {
 	int idx = dring->head;
 	struct netsec_de *de;
@@ -861,10 +1065,16 @@ static void netsec_set_tx_de(struct netsec_priv *priv,
 	de->data_buf_addr_lw = lower_32_bits(desc->dma_addr);
 	de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len;
 	de->attr = attr;
-	dma_wmb();
+	/* under spin_lock if using XDP */
+	if (!dring->is_xdp)
+		dma_wmb();
 
 	dring->desc[idx] = *desc;
-	dring->desc[idx].skb = skb;
+	if (desc->buf_type == TYPE_NETSEC_SKB)
+		dring->desc[idx].skb = buf;
+	else if (desc->buf_type == TYPE_NETSEC_XDP_TX ||
+		 desc->buf_type == TYPE_NETSEC_XDP_NDO)
+		dring->desc[idx].xdpf = buf;
 
 	/* move head ahead */
 	dring->head = (dring->head + 1) % DESC_NUM;
@@ -915,8 +1125,12 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	u16 tso_seg_len = 0;
 	int filled;
 
+	if (dring->is_xdp)
+		spin_lock_bh(&dring->lock);
 	filled = netsec_desc_used(dring);
 	if (netsec_check_stop_tx(priv, filled)) {
+		if (dring->is_xdp)
+			spin_unlock_bh(&dring->lock);
 		net_warn_ratelimited("%s %s Tx queue full\n",
 				     dev_name(priv->dev), ndev->name);
 		return NETDEV_TX_BUSY;
@@ -949,6 +1163,8 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	tx_desc.dma_addr = dma_map_single(priv->dev, skb->data,
 					  skb_headlen(skb), DMA_TO_DEVICE);
 	if (dma_mapping_error(priv->dev, tx_desc.dma_addr)) {
+		if (dring->is_xdp)
+			spin_unlock_bh(&dring->lock);
 		netif_err(priv, drv, priv->ndev,
 			  "%s: DMA mapping failed\n", __func__);
 		ndev->stats.tx_dropped++;
@@ -957,11 +1173,14 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	}
 	tx_desc.addr = skb->data;
 	tx_desc.len = skb_headlen(skb);
+	tx_desc.buf_type = TYPE_NETSEC_SKB;
 
 	skb_tx_timestamp(skb);
 	netdev_sent_queue(priv->ndev, skb->len);
 
 	netsec_set_tx_de(priv, dring, &tx_ctrl, &tx_desc, skb);
+	if (dring->is_xdp)
+		spin_unlock_bh(&dring->lock);
 	netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, 1); /* submit another tx */
 
 	return NETDEV_TX_OK;
@@ -1049,6 +1268,7 @@ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
 static void netsec_setup_tx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
+	struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
 	int i;
 
 	for (i = 0; i < DESC_NUM; i++) {
@@ -1061,11 +1281,18 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
 		 */
 		de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
 	}
+
+	if (xdp_prog)
+		dring->is_xdp = true;
+	else
+		dring->is_xdp = false;
+
 }
 
 static int netsec_setup_rx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
 	struct page_pool_params pp_params = { 0 };
 	int i, err;
 
@@ -1075,7 +1302,7 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 	pp_params.pool_size = DESC_NUM;
 	pp_params.nid = cpu_to_node(0);
 	pp_params.dev = priv->dev;
-	pp_params.dma_dir = DMA_FROM_DEVICE;
+	pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 
 	dring->page_pool = page_pool_create(&pp_params);
 	if (IS_ERR(dring->page_pool)) {
@@ -1495,6 +1722,9 @@ static int netsec_netdev_init(struct net_device *ndev)
 	if (ret)
 		goto err2;
 
+	spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock);
+	spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock);
+
 	return 0;
 err2:
 	netsec_free_dring(priv, NETSEC_RING_RX);
@@ -1527,6 +1757,81 @@ static int netsec_netdev_ioctl(struct net_device *ndev, struct ifreq *ifr,
 	return phy_mii_ioctl(ndev->phydev, ifr, cmd);
 }
 
+static int netsec_xdp_xmit(struct net_device *ndev, int n,
+			   struct xdp_frame **frames, u32 flags)
+{
+	struct netsec_priv *priv = netdev_priv(ndev);
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	int drops = 0;
+	int i;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	spin_lock(&tx_ring->lock);
+	for (i = 0; i < n; i++) {
+		struct xdp_frame *xdpf = frames[i];
+		int err;
+
+		err = netsec_xdp_queue_one(priv, xdpf, true);
+		if (err != NETSEC_XDP_TX) {
+			xdp_return_frame_rx_napi(xdpf);
+			drops++;
+		} else {
+			tx_ring->xdp_xmit++;
+		}
+	}
+	spin_unlock(&tx_ring->lock);
+
+	if (unlikely(flags & XDP_XMIT_FLUSH)) {
+		netsec_xdp_ring_tx_db(priv, tx_ring->xdp_xmit);
+		tx_ring->xdp_xmit = 0;
+	}
+
+	return n - drops;
+}
+
+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
+			    struct netlink_ext_ack *extack)
+{
+	struct net_device *dev = priv->ndev;
+	struct bpf_prog *old_prog;
+
+	/* For now just support only the usual MTU sized frames */
+	if (prog && dev->mtu > 1500) {
+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
+		return -EOPNOTSUPP;
+	}
+
+	if (netif_running(dev))
+		netsec_netdev_stop(dev);
+
+	/* Detach old prog, if any */
+	old_prog = xchg(&priv->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	if (netif_running(dev))
+		netsec_netdev_open(dev);
+
+	return 0;
+}
+
+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
+{
+	struct netsec_priv *priv = netdev_priv(ndev);
+
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
+	case XDP_QUERY_PROG:
+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
 static const struct net_device_ops netsec_netdev_ops = {
 	.ndo_init		= netsec_netdev_init,
 	.ndo_uninit		= netsec_netdev_uninit,
@@ -1537,6 +1842,8 @@ static const struct net_device_ops netsec_netdev_ops = {
 	.ndo_set_mac_address    = eth_mac_addr,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_do_ioctl		= netsec_netdev_ioctl,
+	.ndo_xdp_xmit		= netsec_xdp_xmit,
+	.ndo_bpf		= netsec_xdp,
 };
 
 static int netsec_of_probe(struct platform_device *pdev,
-- 
2.20.1
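For readers following the Tx path in the patch above: the .ndo_xdp_xmit contract is roughly "queue up to n frames, free what you cannot queue, and return the number actually sent". A minimal userspace model of that drop accounting (all mock_* names are invented stand-ins, not driver code):

```c
#include <assert.h>

#define XDP_XMIT_FLUSH      (1U << 0)           /* as in include/net/xdp.h */
#define XDP_XMIT_FLAGS_MASK XDP_XMIT_FLUSH
#define EINVAL 22

/* Mocked Tx ring: pretend 'space' descriptors are free. */
struct mock_ring { int space; int queued; };

/* Stand-in for netsec_xdp_queue_one(): fails once the ring is full. */
static int mock_queue_one(struct mock_ring *r)
{
	if (r->space <= 0)
		return -1;      /* anything but "queued" counts as a drop */
	r->space--;
	r->queued++;
	return 0;
}

/* Same shape as netsec_xdp_xmit(): reject unknown flags, queue each
 * frame, drop the rest (the driver returns those frames with
 * xdp_return_frame_rx_napi()), and report how many frames made it
 * onto the ring. */
static int mock_xdp_xmit(struct mock_ring *r, int n, unsigned int flags)
{
	int drops = 0, i;

	if (flags & ~XDP_XMIT_FLAGS_MASK)
		return -EINVAL;
	for (i = 0; i < n; i++)
		if (mock_queue_one(r) != 0)
			drops++;
	return n - drops;
}
```

The real function additionally takes tx_ring->lock, since the single Tx queue is shared with .ndo_start_xmit, and only rings the doorbell when XDP_XMIT_FLUSH is set.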


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
@ 2019-06-29 10:36   ` Jesper Dangaard Brouer
  2019-06-30 16:20   ` Ivan Khoronzhuk
  2019-06-30 16:25   ` Ivan Khoronzhuk
  2 siblings, 0 replies; 17+ messages in thread
From: Jesper Dangaard Brouer @ 2019-06-29 10:36 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, daniel, ast, makita.toshiaki, jakub.kicinski,
	john.fastabend, davem, maciejromanfijalkowski, brouer

On Sat, 29 Jun 2019 08:23:25 +0300
Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:

> The interface only supports 1 Tx queue so locking is introduced on
> the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
> .ndo_xdp_xmit won't corrupt Tx ring
> 
> - Performance (SMMU off)
> 
> Benchmark   XDP_SKB     XDP_DRV
> xdp1        291kpps     344kpps
> rxdrop      282kpps     342kpps
> 
> - Performance (SMMU on)
> Benchmark   XDP_SKB     XDP_DRV
> xdp1        167kpps     324kpps
> rxdrop      164kpps     323kpps
> 
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

This version looks okay to me.  I don't have the hardware, so I've not
tested it...

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
  2019-06-29 10:36   ` Jesper Dangaard Brouer
@ 2019-06-30 16:20   ` Ivan Khoronzhuk
  2019-06-30 16:34     ` Ilias Apalodimas
  2019-06-30 16:25   ` Ivan Khoronzhuk
  2 siblings, 1 reply; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 16:20 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:

[...]

>+
>+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
>+{
>+	struct netsec_priv *priv = netdev_priv(ndev);
>+
>+	switch (xdp->command) {
>+	case XDP_SETUP_PROG:
>+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
>+	case XDP_QUERY_PROG:
>+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
xdp_attachment family to save bpf flags?

>+		return 0;
>+	default:
>+		return -EINVAL;
>+	}
>+}
>+
> static const struct net_device_ops netsec_netdev_ops = {
> 	.ndo_init		= netsec_netdev_init,
> 	.ndo_uninit		= netsec_netdev_uninit,
>@@ -1537,6 +1842,8 @@ static const struct net_device_ops netsec_netdev_ops = {
> 	.ndo_set_mac_address    = eth_mac_addr,
> 	.ndo_validate_addr	= eth_validate_addr,
> 	.ndo_do_ioctl		= netsec_netdev_ioctl,
>+	.ndo_xdp_xmit		= netsec_xdp_xmit,
>+	.ndo_bpf		= netsec_xdp,
> };
>
> static int netsec_of_probe(struct platform_device *pdev,
>-- 
>2.20.1
>

-- 
Regards,
Ivan Khoronzhuk


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
  2019-06-29 10:36   ` Jesper Dangaard Brouer
  2019-06-30 16:20   ` Ivan Khoronzhuk
@ 2019-06-30 16:25   ` Ivan Khoronzhuk
  2019-06-30 16:32     ` Ilias Apalodimas
  2 siblings, 1 reply; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 16:25 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
>The interface only supports 1 Tx queue so locking is introduced on
>the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
>.ndo_xdp_xmit won't corrupt Tx ring
>
>- Performance (SMMU off)
>
>Benchmark   XDP_SKB     XDP_DRV
>xdp1        291kpps     344kpps
>rxdrop      282kpps     342kpps
>
>- Performance (SMMU on)
>Benchmark   XDP_SKB     XDP_DRV
>xdp1        167kpps     324kpps
>rxdrop      164kpps     323kpps
>
>Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>---
> drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
> 1 file changed, 334 insertions(+), 27 deletions(-)
>

[...]

>+
>+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
>+			    struct netlink_ext_ack *extack)
>+{
>+	struct net_device *dev = priv->ndev;
>+	struct bpf_prog *old_prog;
>+
>+	/* For now just support only the usual MTU sized frames */
>+	if (prog && dev->mtu > 1500) {
>+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
>+		return -EOPNOTSUPP;
>+	}
>+
>+	if (netif_running(dev))
>+		netsec_netdev_stop(dev);
And why stop the interface? XDP allows updating the prog at runtime.

>+
>+	/* Detach old prog, if any */
>+	old_prog = xchg(&priv->xdp_prog, prog);
>+	if (old_prog)
>+		bpf_prog_put(old_prog);
>+
>+	if (netif_running(dev))
>+		netsec_netdev_open(dev);
>+
>+	return 0;
>+}
>+
>+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
>+{
>+	struct netsec_priv *priv = netdev_priv(ndev);
>+
>+	switch (xdp->command) {
>+	case XDP_SETUP_PROG:
>+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
>+	case XDP_QUERY_PROG:
>+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
>+		return 0;
>+	default:
>+		return -EINVAL;
>+	}
>+}
>+
> static const struct net_device_ops netsec_netdev_ops = {
> 	.ndo_init		= netsec_netdev_init,
> 	.ndo_uninit		= netsec_netdev_uninit,
>@@ -1537,6 +1842,8 @@ static const struct net_device_ops netsec_netdev_ops = {
> 	.ndo_set_mac_address    = eth_mac_addr,
> 	.ndo_validate_addr	= eth_validate_addr,
> 	.ndo_do_ioctl		= netsec_netdev_ioctl,
>+	.ndo_xdp_xmit		= netsec_xdp_xmit,
>+	.ndo_bpf		= netsec_xdp,
> };
>
> static int netsec_of_probe(struct platform_device *pdev,
>-- 
>2.20.1
>

-- 
Regards,
Ivan Khoronzhuk


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:25   ` Ivan Khoronzhuk
@ 2019-06-30 16:32     ` Ilias Apalodimas
  2019-06-30 16:41       ` Ivan Khoronzhuk
  0 siblings, 1 reply; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-30 16:32 UTC (permalink / raw)
  To: Ivan Khoronzhuk
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:25:53PM +0300, Ivan Khoronzhuk wrote:
> On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
> >The interface only supports 1 Tx queue so locking is introduced on
> >the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
> >.ndo_xdp_xmit won't corrupt Tx ring
> >
> >- Performance (SMMU off)
> >
> >Benchmark   XDP_SKB     XDP_DRV
> >xdp1        291kpps     344kpps
> >rxdrop      282kpps     342kpps
> >
> >- Performance (SMMU on)
> >Benchmark   XDP_SKB     XDP_DRV
> >xdp1        167kpps     324kpps
> >rxdrop      164kpps     323kpps
> >
> >Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> >---
> >drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
> >1 file changed, 334 insertions(+), 27 deletions(-)
> >
> 
> [...]
> 
> >+
> >+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
> >+			    struct netlink_ext_ack *extack)
> >+{
> >+	struct net_device *dev = priv->ndev;
> >+	struct bpf_prog *old_prog;
> >+
> >+	/* For now just support only the usual MTU sized frames */
> >+	if (prog && dev->mtu > 1500) {
> >+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
> >+		return -EOPNOTSUPP;
> >+	}
> >+
> >+	if (netif_running(dev))
> >+		netsec_netdev_stop(dev);
> And why to stop the interface. XDP allows to update prog in runtime.
> 
Adding the support is not limited to adding a prog only in this driver.
It also rebuilds the queues, which changes the DMA mapping of the buffers.
Since I don't want to map BIDIRECTIONAL buffers if XDP is not in place,
I am resetting the device and forcing the buffer re-allocation.

Thanks
/Ilias
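The trade-off described here boils down to a single decision at ring setup time. A toy userspace model (the constants match the kernel's enum dma_data_direction; the function name is invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Values as in the kernel's enum dma_data_direction. */
#define DMA_BIDIRECTIONAL 0   /* device may read and write (needed for XDP_TX) */
#define DMA_FROM_DEVICE   2   /* device writes, CPU reads (plain Rx) */

/* Pick the page_pool DMA direction for Rx buffers: the cost of a
 * bidirectional mapping is only paid when an XDP prog is attached. */
static int pick_dma_dir(const void *xdp_prog)
{
	return xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
}
```

Because the direction is baked into the page_pool when the rings are built, changing it is what forces the stop/re-open and the buffer re-allocation.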


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:20   ` Ivan Khoronzhuk
@ 2019-06-30 16:34     ` Ilias Apalodimas
  2019-06-30 16:45       ` Ivan Khoronzhuk
  0 siblings, 1 reply; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-30 16:34 UTC (permalink / raw)
  To: Ivan Khoronzhuk
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

Hi Ivan,
> 
> [...]
> 
> >+
> >+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
> >+{
> >+	struct netsec_priv *priv = netdev_priv(ndev);
> >+
> >+	switch (xdp->command) {
> >+	case XDP_SETUP_PROG:
> >+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
> >+	case XDP_QUERY_PROG:
> >+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
> xdp_attachment family to save bpf flags?
Sure, why not. This can always be added later though, since many drivers are
already doing it similarly, no?

> 
> >+		return 0;
> >+	default:
> >+		return -EINVAL;
> >+	}
> >+}
> >+
> >static const struct net_device_ops netsec_netdev_ops = {
> >	.ndo_init		= netsec_netdev_init,
> >	.ndo_uninit		= netsec_netdev_uninit,
> >@@ -1537,6 +1842,8 @@ static const struct net_device_ops netsec_netdev_ops = {
> >	.ndo_set_mac_address    = eth_mac_addr,
> >	.ndo_validate_addr	= eth_validate_addr,
> >	.ndo_do_ioctl		= netsec_netdev_ioctl,
> >+	.ndo_xdp_xmit		= netsec_xdp_xmit,
> >+	.ndo_bpf		= netsec_xdp,
> >};
> >
> >static int netsec_of_probe(struct platform_device *pdev,
> >-- 
> >2.20.1
> >
> 
> -- 
> Regards,
> Ivan Khoronzhuk

Thanks
/Ilias


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:32     ` Ilias Apalodimas
@ 2019-06-30 16:41       ` Ivan Khoronzhuk
  2019-06-30 16:47         ` Ilias Apalodimas
  0 siblings, 1 reply; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 16:41 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:32:14PM +0300, Ilias Apalodimas wrote:
>On Sun, Jun 30, 2019 at 07:25:53PM +0300, Ivan Khoronzhuk wrote:
>> On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
>> >The interface only supports 1 Tx queue so locking is introduced on
>> >the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
>> >.ndo_xdp_xmit won't corrupt Tx ring
>> >
>> >- Performance (SMMU off)
>> >
>> >Benchmark   XDP_SKB     XDP_DRV
>> >xdp1        291kpps     344kpps
>> >rxdrop      282kpps     342kpps
>> >
>> >- Performance (SMMU on)
>> >Benchmark   XDP_SKB     XDP_DRV
>> >xdp1        167kpps     324kpps
>> >rxdrop      164kpps     323kpps
>> >
>> >Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>> >---
>> >drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
>> >1 file changed, 334 insertions(+), 27 deletions(-)
>> >
>>
>> [...]
>>
>> >+
>> >+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
>> >+			    struct netlink_ext_ack *extack)
>> >+{
>> >+	struct net_device *dev = priv->ndev;
>> >+	struct bpf_prog *old_prog;
>> >+
>> >+	/* For now just support only the usual MTU sized frames */
>> >+	if (prog && dev->mtu > 1500) {
>> >+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
>> >+		return -EOPNOTSUPP;
>> >+	}
>> >+
>> >+	if (netif_running(dev))
>> >+		netsec_netdev_stop(dev);
>> And why to stop the interface. XDP allows to update prog in runtime.
>>
>Adding the support is not limited to  adding a prog only in this driver.
>It also rebuilts the queues which changes the dma mapping of buffers.
>Since i don't want to map BIDIRECTIONAL buffers if XDP is not in place,
>i am resetting the device and forcing the buffer re-allocation
>
>Thanks
>/Ilias
I don't know the internals, and it probably has some dependencies, but here
you just update the prog, so you could at least skip the reset when an
exchange is happening. I mean not the case where a prog is attached/removed
for the first time; in the prog -> prog case it seems doable...

It's up to you ofc, but I can run smth like:
ip -force link set dev eth0 xdp obj xdp-example-pass.o sec .text
and expect it to be updated w/o an interface reset, I mean on a new prog.

I'm not sure, but the maintainers can help; conceptually the prog is supposed
to be updated at runtime under RCU as part of the API usage...


-- 
Regards,
Ivan Khoronzhuk


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:34     ` Ilias Apalodimas
@ 2019-06-30 16:45       ` Ivan Khoronzhuk
  2019-06-30 16:50         ` Ilias Apalodimas
  0 siblings, 1 reply; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 16:45 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:34:17PM +0300, Ilias Apalodimas wrote:
>Hi Ivan,
>>
>> [...]
>>
>> >+
>> >+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
>> >+{
>> >+	struct netsec_priv *priv = netdev_priv(ndev);
>> >+
>> >+	switch (xdp->command) {
>> >+	case XDP_SETUP_PROG:
>> >+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
>> >+	case XDP_QUERY_PROG:
>> >+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
>> xdp_attachment family to save bpf flags?
>Sure why not. This can always be added later though since many drivers are
>already doing it similarly no?
Yes, I can work w/o this ofc.
But netronome and cpsw (me) added it.
From what I've seen, it allows preventing a prog update if the flag doesn't
allow it. Usually it doesn't, but it can be forced with the flag. Otherwise it
can be updated at any time w/o reason... and it seems like in your case it's
sensitive.

>
>>
>> >+		return 0;
>> >+	default:
>> >+		return -EINVAL;
>> >+	}
>> >+}
>> >+
>> >static const struct net_device_ops netsec_netdev_ops = {
>> >	.ndo_init		= netsec_netdev_init,
>> >	.ndo_uninit		= netsec_netdev_uninit,
>> >@@ -1537,6 +1842,8 @@ static const struct net_device_ops netsec_netdev_ops = {
>> >	.ndo_set_mac_address    = eth_mac_addr,
>> >	.ndo_validate_addr	= eth_validate_addr,
>> >	.ndo_do_ioctl		= netsec_netdev_ioctl,
>> >+	.ndo_xdp_xmit		= netsec_xdp_xmit,
>> >+	.ndo_bpf		= netsec_xdp,
>> >};
>> >
>> >static int netsec_of_probe(struct platform_device *pdev,
>> >--
>> >2.20.1
>> >
>>
>> --
>> Regards,
>> Ivan Khoronzhuk
>
>Thanks
>/Ilias

-- 
Regards,
Ivan Khoronzhuk


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:41       ` Ivan Khoronzhuk
@ 2019-06-30 16:47         ` Ilias Apalodimas
  2019-06-30 16:51           ` Ivan Khoronzhuk
  2019-06-30 17:09           ` Ivan Khoronzhuk
  0 siblings, 2 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-30 16:47 UTC (permalink / raw)
  To: Ivan Khoronzhuk
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:41:28PM +0300, Ivan Khoronzhuk wrote:
> On Sun, Jun 30, 2019 at 07:32:14PM +0300, Ilias Apalodimas wrote:
> >On Sun, Jun 30, 2019 at 07:25:53PM +0300, Ivan Khoronzhuk wrote:
> >>On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
> >>>The interface only supports 1 Tx queue so locking is introduced on
> >>>the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
> >>>.ndo_xdp_xmit won't corrupt Tx ring
> >>>
> >>>- Performance (SMMU off)
> >>>
> >>>Benchmark   XDP_SKB     XDP_DRV
> >>>xdp1        291kpps     344kpps
> >>>rxdrop      282kpps     342kpps
> >>>
> >>>- Performance (SMMU on)
> >>>Benchmark   XDP_SKB     XDP_DRV
> >>>xdp1        167kpps     324kpps
> >>>rxdrop      164kpps     323kpps
> >>>
> >>>Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> >>>---
> >>>drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
> >>>1 file changed, 334 insertions(+), 27 deletions(-)
> >>>
> >>
> >>[...]
> >>
> >>>+
> >>>+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
> >>>+			    struct netlink_ext_ack *extack)
> >>>+{
> >>>+	struct net_device *dev = priv->ndev;
> >>>+	struct bpf_prog *old_prog;
> >>>+
> >>>+	/* For now just support only the usual MTU sized frames */
> >>>+	if (prog && dev->mtu > 1500) {
> >>>+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
> >>>+		return -EOPNOTSUPP;
> >>>+	}
> >>>+
> >>>+	if (netif_running(dev))
> >>>+		netsec_netdev_stop(dev);
> >>And why to stop the interface. XDP allows to update prog in runtime.
> >>
> >Adding the support is not limited to  adding a prog only in this driver.
> >It also rebuilts the queues which changes the dma mapping of buffers.
> >Since i don't want to map BIDIRECTIONAL buffers if XDP is not in place,
> >i am resetting the device and forcing the buffer re-allocation
> >
> >Thanks
> >/Ilias
> I don't know the internals, probably it has some dependencies, but here you
> just update the prog and can at least do it when exchange is happening.
> I mean not in case of prog is attached/removed first time.
> In case of prog -> prog it seems doable...
> 
> It ups to you ofc, but I can run smth like:
> ip -force link set dev eth0 xdp obj xdp-example-pass.o sec .text
> and expect it's updated w/o interface reset I mean on new prog.
> 
> I'm not sure, but maintainers can help, conceptually it's supposed to be in
> runtime the prog be update uder rcu as a part of API usage...
It's doable, but it means I'd have to change the buffer allocation again. I'd
also prefer mapping FOR_DEVICE only if XDP is not loaded. Most of the drivers
do a restart now, so I'll stick with that for the current version.

Thanks
/Ilias


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:45       ` Ivan Khoronzhuk
@ 2019-06-30 16:50         ` Ilias Apalodimas
  0 siblings, 0 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-06-30 16:50 UTC (permalink / raw)
  To: Ivan Khoronzhuk
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:45:13PM +0300, Ivan Khoronzhuk wrote:
> On Sun, Jun 30, 2019 at 07:34:17PM +0300, Ilias Apalodimas wrote:
> >Hi Ivan,
> >>
> >>[...]
> >>
> >>>+
> >>>+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
> >>>+{
> >>>+	struct netsec_priv *priv = netdev_priv(ndev);
> >>>+
> >>>+	switch (xdp->command) {
> >>>+	case XDP_SETUP_PROG:
> >>>+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
> >>>+	case XDP_QUERY_PROG:
> >>>+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
> >>xdp_attachment family to save bpf flags?
> >Sure why not. This can always be added later though since many drivers are
> >already doing it similarly no?
> yes.
> I can work w/o this ofc.
> But netronome and cpsw (me) added this.
Ok, let's start using that.

> What I've seen it allows to prevent prog update if flag doesn't allow it.
> Usually it doesn't allow, but can be forced with flag. In another case it can
> be updated any time w/o reason...and seems like in your case it's sensitive.
I intend to send a follow-up patch anyway to remove the declaration of
netsec_set_tx_de() at the top of the file. I intentionally chose to add that
to make the review easier (since re-arranging would mess it up).

I'll just add this optimization as well in the follow-up patch, since it
doesn't break anything.

Thanks
/Ilias
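A toy model of the policy the suggested xdp_attachment conversion would give: with XDP_FLAGS_UPDATE_IF_NOEXIST set, attaching must fail if a prog is already loaded. The real helpers live in include/net/xdp.h and split this policing between the core and the driver; the names below are simplified stand-ins, not kernel API:

```c
#include <assert.h>
#include <stddef.h>

#define XDP_FLAGS_UPDATE_IF_NOEXIST (1U << 0)  /* as in uapi/linux/if_link.h */
#define EBUSY 16

/* Simplified stand-in for struct xdp_attachment_info. */
struct attachment { void *prog; unsigned int flags; };

static int attach_prog(struct attachment *a, void *prog, unsigned int flags)
{
	if ((flags & XDP_FLAGS_UPDATE_IF_NOEXIST) && a->prog)
		return -EBUSY;  /* refuse to replace an existing prog */
	a->prog = prog;
	a->flags = flags;
	return 0;
}
```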


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:47         ` Ilias Apalodimas
@ 2019-06-30 16:51           ` Ivan Khoronzhuk
  2019-06-30 17:09           ` Ivan Khoronzhuk
  1 sibling, 0 replies; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 16:51 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:47:45PM +0300, Ilias Apalodimas wrote:
>On Sun, Jun 30, 2019 at 07:41:28PM +0300, Ivan Khoronzhuk wrote:
>> On Sun, Jun 30, 2019 at 07:32:14PM +0300, Ilias Apalodimas wrote:
>> >On Sun, Jun 30, 2019 at 07:25:53PM +0300, Ivan Khoronzhuk wrote:
>> >>On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
>> >>>The interface only supports 1 Tx queue so locking is introduced on
>> >>>the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
>> >>>.ndo_xdp_xmit won't corrupt Tx ring
>> >>>
>> >>>- Performance (SMMU off)
>> >>>
>> >>>Benchmark   XDP_SKB     XDP_DRV
>> >>>xdp1        291kpps     344kpps
>> >>>rxdrop      282kpps     342kpps
>> >>>
>> >>>- Performance (SMMU on)
>> >>>Benchmark   XDP_SKB     XDP_DRV
>> >>>xdp1        167kpps     324kpps
>> >>>rxdrop      164kpps     323kpps
>> >>>
>> >>>Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>> >>>---
>> >>>drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
>> >>>1 file changed, 334 insertions(+), 27 deletions(-)
>> >>>
>> >>
>> >>[...]
>> >>
>> >>>+
>> >>>+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
>> >>>+			    struct netlink_ext_ack *extack)
>> >>>+{
>> >>>+	struct net_device *dev = priv->ndev;
>> >>>+	struct bpf_prog *old_prog;
>> >>>+
>> >>>+	/* For now just support only the usual MTU sized frames */
>> >>>+	if (prog && dev->mtu > 1500) {
>> >>>+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
>> >>>+		return -EOPNOTSUPP;
>> >>>+	}
>> >>>+
>> >>>+	if (netif_running(dev))
>> >>>+		netsec_netdev_stop(dev);
>> >>And why to stop the interface. XDP allows to update prog in runtime.
>> >>
>> >Adding the support is not limited to  adding a prog only in this driver.
>> >It also rebuilts the queues which changes the dma mapping of buffers.
>> >Since i don't want to map BIDIRECTIONAL buffers if XDP is not in place,
>> >i am resetting the device and forcing the buffer re-allocation
>> >
>> >Thanks
>> >/Ilias
>> I don't know the internals, probably it has some dependencies, but here you
>> just update the prog and can at least do it when exchange is happening.
>> I mean not in case of prog is attached/removed first time.
>> In case of prog -> prog it seems doable...
>>
>> It ups to you ofc, but I can run smth like:
>> ip -force link set dev eth0 xdp obj xdp-example-pass.o sec .text
>> and expect it's updated w/o interface reset I mean on new prog.
>>
>> I'm not sure, but maintainers can help, conceptually it's supposed to be in
>> runtime the prog be update uder rcu as a part of API usage...
>It's doable but it means i'd have to change the buffer allocation again. I'd
>also prefer mapping FOR_DEVICE only if XDP is not loaded. Most of the drivers do
>restart so i'll stick with this for the current version.
>Most of the drivers do restart now so i'll stick to that for now.
I have nothing against it being merged. Just pay attention to
XDP_FLAGS_UPDATE_IF_NOEXIST, whether you update it later or now... up to you.

>
>Thanks
>/Ilias

-- 
Regards,
Ivan Khoronzhuk


* Re: [net-next, PATCH 3/3, v2] net: netsec: add XDP support
  2019-06-30 16:47         ` Ilias Apalodimas
  2019-06-30 16:51           ` Ivan Khoronzhuk
@ 2019-06-30 17:09           ` Ivan Khoronzhuk
  1 sibling, 0 replies; 17+ messages in thread
From: Ivan Khoronzhuk @ 2019-06-30 17:09 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, davem, maciejromanfijalkowski

On Sun, Jun 30, 2019 at 07:47:45PM +0300, Ilias Apalodimas wrote:
>On Sun, Jun 30, 2019 at 07:41:28PM +0300, Ivan Khoronzhuk wrote:
>> On Sun, Jun 30, 2019 at 07:32:14PM +0300, Ilias Apalodimas wrote:
>> >On Sun, Jun 30, 2019 at 07:25:53PM +0300, Ivan Khoronzhuk wrote:
>> >>On Sat, Jun 29, 2019 at 08:23:25AM +0300, Ilias Apalodimas wrote:
>> >>>The interface only supports 1 Tx queue so locking is introduced on
>> >>>the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
>> >>>.ndo_xdp_xmit won't corrupt Tx ring
>> >>>
>> >>>- Performance (SMMU off)
>> >>>
>> >>>Benchmark   XDP_SKB     XDP_DRV
>> >>>xdp1        291kpps     344kpps
>> >>>rxdrop      282kpps     342kpps
>> >>>
>> >>>- Performance (SMMU on)
>> >>>Benchmark   XDP_SKB     XDP_DRV
>> >>>xdp1        167kpps     324kpps
>> >>>rxdrop      164kpps     323kpps
>> >>>
>> >>>Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>> >>>---
>> >>>drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++--
>> >>>1 file changed, 334 insertions(+), 27 deletions(-)
>> >>>
>> >>
>> >>[...]
>> >>
>> >>>+
>> >>>+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
>> >>>+			    struct netlink_ext_ack *extack)
>> >>>+{
>> >>>+	struct net_device *dev = priv->ndev;
>> >>>+	struct bpf_prog *old_prog;
>> >>>+
>> >>>+	/* For now just support only the usual MTU sized frames */
>> >>>+	if (prog && dev->mtu > 1500) {
>> >>>+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
>> >>>+		return -EOPNOTSUPP;
>> >>>+	}
>> >>>+
>> >>>+	if (netif_running(dev))
>> >>>+		netsec_netdev_stop(dev);
>> >>And why to stop the interface. XDP allows to update prog in runtime.
>> >>
>> >Adding the support is not limited to  adding a prog only in this driver.
>> >It also rebuilts the queues which changes the dma mapping of buffers.
>> >Since i don't want to map BIDIRECTIONAL buffers if XDP is not in place,
>> >i am resetting the device and forcing the buffer re-allocation
>> >
>> >Thanks
>> >/Ilias
>> I don't know the internals, probably it has some dependencies, but here you
>> just update the prog and can at least do it when exchange is happening.
>> I mean not in case of prog is attached/removed first time.
>> In case of prog -> prog it seems doable...
>>
>> It ups to you ofc, but I can run smth like:
>> ip -force link set dev eth0 xdp obj xdp-example-pass.o sec .text
>> and expect it's updated w/o interface reset I mean on new prog.
>>
>> I'm not sure, but maintainers can help, conceptually it's supposed to be in
>> runtime the prog be update uder rcu as a part of API usage...
>It's doable but it means i'd have to change the buffer allocation again. I'd
In the case of a prog -> prog update you don't need to do anything,
just exchange the prog. That's it.

You can add it later, if you want. np.

-- 
Regards,
Ivan Khoronzhuk
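The prog -> prog path really is just an exchange; a userspace sketch with C11 atomics (in the driver this is the xchg() plus bpf_prog_put() pair in netsec_xdp_setup(); swap_prog and active_prog are invented names):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

static _Atomic(void *) active_prog;   /* stands in for priv->xdp_prog */

/* Atomically install the new prog and hand back the old one so the
 * caller can release it (bpf_prog_put() in the kernel); no ring
 * teardown is required for a pure replacement. */
static void *swap_prog(void *new_prog)
{
	return atomic_exchange(&active_prog, new_prog);
}
```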


* Re: [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support
  2019-06-29  5:23 [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support Ilias Apalodimas
                   ` (2 preceding siblings ...)
  2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
@ 2019-07-02  2:27 ` David Miller
  2019-07-02  3:18   ` Ilias Apalodimas
  3 siblings, 1 reply; 17+ messages in thread
From: David Miller @ 2019-07-02  2:27 UTC (permalink / raw)
  To: ilias.apalodimas
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, maciejromanfijalkowski

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Date: Sat, 29 Jun 2019 08:23:22 +0300

> This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
> Since page_pool API fixes are merged into net-next we can now safely use 
> it's DMA mapping capabilities. 
> 
> First patch changes the buffer allocation from napi/netdev_alloc_frag()
> to page_pool API. Although this will lead to slightly reduced performance 
> (on raw packet drops only) we can use the API for XDP buffer recycling. 
> Another side effect is a slight increase in memory usage, due to using a 
> single page per packet.
> 
> The second patch adds XDP support on the driver. 
> There's a bunch of interesting options that come up due to the single 
> Tx queue.
> Locking is needed(to avoid messing up the Tx queues since ndo_xdp_xmit 
> and the normal stack can co-exist). We also need to track down the 
> 'buffer type' for TX and properly free or recycle the packet depending 
> on it's nature.
> 
> 
> Changes since RFC:
> - Bug fixes from Jesper and Maciej
> - Added page pool API to retrieve the DMA direction
> 
> Changes since v1:
> - Use page_pool_free correctly if xdp_rxq_info_reg() failed

Series applied, thanks.

I realize from the discussion on patch #3 there will be follow-ups to this.


* Re: [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support
  2019-07-02  2:27 ` [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support David Miller
@ 2019-07-02  3:18   ` Ilias Apalodimas
  0 siblings, 0 replies; 17+ messages in thread
From: Ilias Apalodimas @ 2019-07-02  3:18 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, jaswinder.singh, ard.biesheuvel, bjorn.topel,
	magnus.karlsson, brouer, daniel, ast, makita.toshiaki,
	jakub.kicinski, john.fastabend, maciejromanfijalkowski

Hi David

[...]
> 
> Series applied, thanks.
> 
> I realize from the discussion on patch #3 there will be follow-ups to this.
Yea, small cosmetic changes. I'll send them shortly

Thanks
/Ilias


end of thread, other threads:[~2019-07-02  3:18 UTC | newest]

Thread overview: 17+ messages
-- links below jump to the message on this page --
2019-06-29  5:23 [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support Ilias Apalodimas
2019-06-29  5:23 ` [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API Ilias Apalodimas
2019-06-29  5:23 ` [net-next, PATCH 2/3, v2] net: page_pool: add helper function for retrieving dma direction Ilias Apalodimas
2019-06-29  5:23 ` [net-next, PATCH 3/3, v2] net: netsec: add XDP support Ilias Apalodimas
2019-06-29 10:36   ` Jesper Dangaard Brouer
2019-06-30 16:20   ` Ivan Khoronzhuk
2019-06-30 16:34     ` Ilias Apalodimas
2019-06-30 16:45       ` Ivan Khoronzhuk
2019-06-30 16:50         ` Ilias Apalodimas
2019-06-30 16:25   ` Ivan Khoronzhuk
2019-06-30 16:32     ` Ilias Apalodimas
2019-06-30 16:41       ` Ivan Khoronzhuk
2019-06-30 16:47         ` Ilias Apalodimas
2019-06-30 16:51           ` Ivan Khoronzhuk
2019-06-30 17:09           ` Ivan Khoronzhuk
2019-07-02  2:27 ` [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support David Miller
2019-07-02  3:18   ` Ilias Apalodimas
