* [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff
@ 2020-12-12 17:41 Lorenzo Bianconi
  2020-12-12 17:41 ` [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine Lorenzo Bianconi
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-12 17:41 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed

Introduce xdp_init_buff and xdp_prepare_buff utility routines to initialize
xdp_buff data structure and remove duplicated code in all XDP capable
drivers.
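
As a quick usage sketch (driver-specific names such as rx_ring, rx_buf,
xdp_prog and pkt_len are purely illustrative, not taken from any one
driver), the intended split is xdp_init_buff() once per NAPI poll for
the fields that do not change between descriptors, and
xdp_prepare_buff() once per received frame:

	struct xdp_buff xdp;

	/* constant over the whole poll: frame size and rxq pointer */
	xdp_init_buff(&xdp, PAGE_SIZE, &rx_ring->xdp_rxq);

	while (budget--) {
		unsigned char *hard_start = page_address(rx_buf->page);

		/* per-descriptor: data_hard_start/data/data_end/data_meta */
		xdp_prepare_buff(&xdp, hard_start, XDP_PACKET_HEADROOM,
				 pkt_len);
		act = bpf_prog_run_xdp(xdp_prog, &xdp);
		/* handle act, advance to the next descriptor, ... */
	}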

Changes since v2:
- precompute xdp->data as hard_start + headroom and save it in a local
  variable to reuse it for xdp->data_end and xdp->data_meta in
  xdp_prepare_buff()

Changes since v1:
- introduce xdp_prepare_buff utility routine

Lorenzo Bianconi (2):
  net: xdp: introduce xdp_init_buff utility routine
  net: xdp: introduce xdp_prepare_buff utility routine

 drivers/net/ethernet/amazon/ena/ena_netdev.c  |  8 +++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  7 ++-----
 .../net/ethernet/cavium/thunder/nicvf_main.c  | 11 ++++++-----
 .../net/ethernet/freescale/dpaa/dpaa_eth.c    | 10 ++++------
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  | 13 +++++--------
 drivers/net/ethernet/intel/i40e/i40e_txrx.c   | 18 +++++++++---------
 drivers/net/ethernet/intel/ice/ice_txrx.c     | 17 +++++++++--------
 drivers/net/ethernet/intel/igb/igb_main.c     | 18 +++++++++---------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 19 +++++++++----------
 .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 19 +++++++++----------
 drivers/net/ethernet/marvell/mvneta.c         |  9 +++------
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 13 +++++++------
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  8 +++-----
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  7 ++-----
 .../ethernet/netronome/nfp/nfp_net_common.c   | 12 ++++++------
 drivers/net/ethernet/qlogic/qede/qede_fp.c    |  7 ++-----
 drivers/net/ethernet/sfc/rx.c                 |  9 +++------
 drivers/net/ethernet/socionext/netsec.c       |  8 +++-----
 drivers/net/ethernet/ti/cpsw.c                | 17 ++++++-----------
 drivers/net/ethernet/ti/cpsw_new.c            | 17 ++++++-----------
 drivers/net/hyperv/netvsc_bpf.c               |  7 ++-----
 drivers/net/tun.c                             | 11 ++++-------
 drivers/net/veth.c                            | 14 +++++---------
 drivers/net/virtio_net.c                      | 18 ++++++------------
 drivers/net/xen-netfront.c                    |  8 +++-----
 include/net/xdp.h                             | 19 +++++++++++++++++++
 net/bpf/test_run.c                            |  9 +++------
 net/core/dev.c                                | 18 ++++++++----------
 28 files changed, 156 insertions(+), 195 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine
  2020-12-12 17:41 [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Lorenzo Bianconi
@ 2020-12-12 17:41 ` Lorenzo Bianconi
  2020-12-16  8:35   ` Jesper Dangaard Brouer
  2020-12-12 17:41 ` [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff " Lorenzo Bianconi
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-12 17:41 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed

Introduce the xdp_init_buff utility routine to initialize the xdp_buff
fields that are constant across NAPI iterations (e.g. frame_sz or the
rxq pointer). Rely on xdp_init_buff in all XDP capable drivers.
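
For reference, the shape of the change in the drivers below (field and
variable names vary per driver; the ones here are only illustrative):

	/* before: open-coded in each driver's receive path */
	xdp.rxq = &rx_ring->xdp_rxq;
	xdp.frame_sz = PAGE_SIZE;

	/* after */
	xdp_init_buff(&xdp, PAGE_SIZE, &rx_ring->xdp_rxq);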

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c        | 3 +--
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c       | 3 +--
 drivers/net/ethernet/cavium/thunder/nicvf_main.c    | 4 ++--
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c      | 4 ++--
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c    | 8 ++++----
 drivers/net/ethernet/intel/i40e/i40e_txrx.c         | 6 +++---
 drivers/net/ethernet/intel/ice/ice_txrx.c           | 6 +++---
 drivers/net/ethernet/intel/igb/igb_main.c           | 6 +++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c       | 7 +++----
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c   | 7 +++----
 drivers/net/ethernet/marvell/mvneta.c               | 3 +--
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c     | 8 +++++---
 drivers/net/ethernet/mellanox/mlx4/en_rx.c          | 3 +--
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c     | 3 +--
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 4 ++--
 drivers/net/ethernet/qlogic/qede/qede_fp.c          | 3 +--
 drivers/net/ethernet/sfc/rx.c                       | 3 +--
 drivers/net/ethernet/socionext/netsec.c             | 3 +--
 drivers/net/ethernet/ti/cpsw.c                      | 4 ++--
 drivers/net/ethernet/ti/cpsw_new.c                  | 4 ++--
 drivers/net/hyperv/netvsc_bpf.c                     | 3 +--
 drivers/net/tun.c                                   | 7 +++----
 drivers/net/veth.c                                  | 8 ++++----
 drivers/net/virtio_net.c                            | 6 ++----
 drivers/net/xen-netfront.c                          | 4 ++--
 include/net/xdp.h                                   | 7 +++++++
 net/bpf/test_run.c                                  | 4 ++--
 net/core/dev.c                                      | 8 ++++----
 28 files changed, 67 insertions(+), 72 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 0e98f45c2b22..338dce73927e 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1567,8 +1567,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
 		  "%s qid %d\n", __func__, rx_ring->qid);
 	res_budget = budget;
-	xdp.rxq = &rx_ring->xdp_rxq;
-	xdp.frame_sz = ENA_PAGE_SIZE;
+	xdp_init_buff(&xdp, ENA_PAGE_SIZE, &rx_ring->xdp_rxq);
 
 	do {
 		xdp_verdict = XDP_PASS;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index fcc262064766..b7942c3440c0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -133,12 +133,11 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
 	dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
 
 	txr = rxr->bnapi->tx_ring;
+	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);
 	xdp.data_hard_start = *data_ptr - offset;
 	xdp.data = *data_ptr;
 	xdp_set_data_meta_invalid(&xdp);
 	xdp.data_end = *data_ptr + *len;
-	xdp.rxq = &rxr->xdp_rxq;
-	xdp.frame_sz = PAGE_SIZE; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index f3b7b443f964..9fc672f075f2 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -547,12 +547,12 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
 	cpu_addr = (u64)phys_to_virt(cpu_addr);
 	page = virt_to_page((void *)cpu_addr);
 
+	xdp_init_buff(&xdp, RCV_FRAG_LEN + XDP_PACKET_HEADROOM,
+		      &rq->xdp_rxq);
 	xdp.data_hard_start = page_address(page);
 	xdp.data = (void *)cpu_addr;
 	xdp_set_data_meta_invalid(&xdp);
 	xdp.data_end = xdp.data + len;
-	xdp.rxq = &rq->xdp_rxq;
-	xdp.frame_sz = RCV_FRAG_LEN + XDP_PACKET_HEADROOM;
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index e28510c282e5..93030000e0aa 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2536,12 +2536,12 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
 		return XDP_PASS;
 	}
 
+	xdp_init_buff(&xdp, DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE,
+		      &dpaa_fq->xdp_rxq);
 	xdp.data = vaddr + fd_off;
 	xdp.data_meta = xdp.data;
 	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
 	xdp.data_end = xdp.data + qm_fd_get_length(fd);
-	xdp.frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE;
-	xdp.rxq = &dpaa_fq->xdp_rxq;
 
 	/* We reserve a fixed headroom of 256 bytes under the erratum and we
 	 * offer it all to XDP programs to use. If no room is left for the
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 91cff93dbdae..a4ade0b5adb0 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -358,14 +358,14 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
 	if (!xdp_prog)
 		goto out;
 
+	xdp_init_buff(&xdp,
+		      DPAA2_ETH_RX_BUF_RAW_SIZE -
+		      (dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM),
+		      &ch->xdp_rxq);
 	xdp.data = vaddr + dpaa2_fd_get_offset(fd);
 	xdp.data_end = xdp.data + dpaa2_fd_get_len(fd);
 	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
 	xdp_set_data_meta_invalid(&xdp);
-	xdp.rxq = &ch->xdp_rxq;
-
-	xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE -
-		(dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM);
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9f73cd7aee09..4dbbbd49c389 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2332,7 +2332,7 @@ static void i40e_inc_ntc(struct i40e_ring *rx_ring)
  **/
 static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
 	struct sk_buff *skb = rx_ring->skb;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
 	unsigned int xdp_xmit = 0;
@@ -2340,9 +2340,9 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 	struct xdp_buff xdp;
 
 #if (PAGE_SIZE < 8192)
-	xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
+	frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
 #endif
-	xdp.rxq = &rx_ring->xdp_rxq;
+	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		struct i40e_rx_buffer *rx_buffer;
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 77d5eae6b4c2..d52d98d56367 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1077,18 +1077,18 @@ ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
  */
 int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+	unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
 	struct bpf_prog *xdp_prog = NULL;
 	struct xdp_buff xdp;
 	bool failure;
 
-	xdp.rxq = &rx_ring->xdp_rxq;
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
 #if (PAGE_SIZE < 8192)
-	xdp.frame_sz = ice_rx_frame_truesize(rx_ring, 0);
+	frame_sz = ice_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	/* start the loop to process Rx packets bounded by 'budget' */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 6a4ef4934fcf..365dfc0e3b65 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8666,13 +8666,13 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
 	u16 cleaned_count = igb_desc_unused(rx_ring);
 	unsigned int xdp_xmit = 0;
 	struct xdp_buff xdp;
-
-	xdp.rxq = &rx_ring->xdp_rxq;
+	u32 frame_sz = 0;
 
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
 #if (PAGE_SIZE < 8192)
-	xdp.frame_sz = igb_rx_frame_truesize(rx_ring, 0);
+	frame_sz = igb_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	while (likely(total_packets < budget)) {
 		union e1000_adv_rx_desc *rx_desc;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 50e6b8b6ba7b..dcd49cfa36f7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2282,7 +2282,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			       struct ixgbe_ring *rx_ring,
 			       const int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 #ifdef IXGBE_FCOE
 	int ddp_bytes;
@@ -2292,12 +2292,11 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	unsigned int xdp_xmit = 0;
 	struct xdp_buff xdp;
 
-	xdp.rxq = &rx_ring->xdp_rxq;
-
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
 #if (PAGE_SIZE < 8192)
-	xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, 0);
+	frame_sz = ixgbe_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	while (likely(total_rx_packets < budget)) {
 		union ixgbe_adv_rx_desc *rx_desc;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 4061cd7db5dd..624efcd71569 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -1121,19 +1121,18 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 				struct ixgbevf_ring *rx_ring,
 				int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
 	struct ixgbevf_adapter *adapter = q_vector->adapter;
 	u16 cleaned_count = ixgbevf_desc_unused(rx_ring);
 	struct sk_buff *skb = rx_ring->skb;
 	bool xdp_xmit = false;
 	struct xdp_buff xdp;
 
-	xdp.rxq = &rx_ring->xdp_rxq;
-
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
 #if (PAGE_SIZE < 8192)
-	xdp.frame_sz = ixgbevf_rx_frame_truesize(rx_ring, 0);
+	frame_sz = ixgbevf_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	while (likely(total_rx_packets < budget)) {
 		struct ixgbevf_rx_buffer *rx_buffer;
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 563ceac3060f..acbb9cb85ada 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2363,9 +2363,8 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	u32 desc_status, frame_sz;
 	struct xdp_buff xdp_buf;
 
+	xdp_init_buff(&xdp_buf, PAGE_SIZE, &rxq->xdp_rxq);
 	xdp_buf.data_hard_start = NULL;
-	xdp_buf.frame_sz = PAGE_SIZE;
-	xdp_buf.rxq = &rxq->xdp_rxq;
 
 	sinfo.nr_frags = 0;
 
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index afdd22827223..ca05dfc05058 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3562,16 +3562,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 			frag_size = bm_pool->frag_size;
 
 		if (xdp_prog) {
+			struct xdp_rxq_info *xdp_rxq;
+
 			xdp.data_hard_start = data;
 			xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
 			xdp.data_end = xdp.data + rx_bytes;
-			xdp.frame_sz = PAGE_SIZE;
 
 			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
-				xdp.rxq = &rxq->xdp_rxq_short;
+				xdp_rxq = &rxq->xdp_rxq_short;
 			else
-				xdp.rxq = &rxq->xdp_rxq_long;
+				xdp_rxq = &rxq->xdp_rxq_long;
 
+			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
 			xdp_set_data_meta_invalid(&xdp);
 
 			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp, &ps);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 7954c1daf2b6..815381b484ca 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -682,8 +682,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
 	/* Protect accesses to: ring->xdp_prog, priv->mac_hash list */
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(ring->xdp_prog);
-	xdp.rxq = &ring->xdp_rxq;
-	xdp.frame_sz = priv->frag_info[0].frag_stride;
+	xdp_init_buff(&xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
 	doorbell_pending = false;
 
 	/* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 6628a0197b4e..c68628b1f30b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1127,12 +1127,11 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
 static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
 				u32 len, struct xdp_buff *xdp)
 {
+	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
 	xdp->data_hard_start = va;
 	xdp->data = va + headroom;
 	xdp_set_data_meta_invalid(xdp);
 	xdp->data_end = xdp->data + len;
-	xdp->rxq = &rq->xdp_rxq;
-	xdp->frame_sz = rq->buff.frame0_sz;
 }
 
 static struct sk_buff *
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index b4acf2f41e84..68e03e8257f2 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -1822,8 +1822,8 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(dp->xdp_prog);
 	true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz;
-	xdp.frame_sz = PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM;
-	xdp.rxq = &rx_ring->xdp_rxq;
+	xdp_init_buff(&xdp, PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM,
+		      &rx_ring->xdp_rxq);
 	tx_ring = r_vec->xdp_ring;
 
 	while (pkts_polled < budget) {
diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
index a2494bf85007..d40220043883 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
@@ -1090,12 +1090,11 @@ static bool qede_rx_xdp(struct qede_dev *edev,
 	struct xdp_buff xdp;
 	enum xdp_action act;
 
+	xdp_init_buff(&xdp, rxq->rx_buf_seg_size, &rxq->xdp_rxq);
 	xdp.data_hard_start = page_address(bd->data);
 	xdp.data = xdp.data_hard_start + *data_offset;
 	xdp_set_data_meta_invalid(&xdp);
 	xdp.data_end = xdp.data + *len;
-	xdp.rxq = &rxq->xdp_rxq;
-	xdp.frame_sz = rxq->rx_buf_seg_size; /* PAGE_SIZE when XDP enabled */
 
 	/* Queues always have a full reset currently, so for the time
 	 * being until there's atomic program replace just mark read
diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index aaa112877561..eaa6650955d1 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -293,14 +293,13 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
 	memcpy(rx_prefix, *ehp - efx->rx_prefix_size,
 	       efx->rx_prefix_size);
 
+	xdp_init_buff(&xdp, efx->rx_page_buf_step, &rx_queue->xdp_rxq_info);
 	xdp.data = *ehp;
 	xdp.data_hard_start = xdp.data - EFX_XDP_HEADROOM;
 
 	/* No support yet for XDP metadata */
 	xdp_set_data_meta_invalid(&xdp);
 	xdp.data_end = xdp.data + rx_buf->len;
-	xdp.rxq = &rx_queue->xdp_rxq_info;
-	xdp.frame_sz = efx->rx_page_buf_step;
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 	rcu_read_unlock();
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 19d20a6d0d44..945ca9517bf9 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -956,8 +956,7 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 	u32 xdp_act = 0;
 	int done = 0;
 
-	xdp.rxq = &dring->xdp_rxq;
-	xdp.frame_sz = PAGE_SIZE;
+	xdp_init_buff(&xdp, PAGE_SIZE, &dring->xdp_rxq);
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(priv->xdp_prog);
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b0f00b4edd94..78a923391828 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -392,6 +392,8 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
+		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+
 		if (status & CPDMA_RX_VLAN_ENCAP) {
 			xdp.data = pa + CPSW_HEADROOM +
 				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
@@ -405,8 +407,6 @@ static void cpsw_rx_handler(void *token, int len, int status)
 		xdp_set_data_meta_invalid(&xdp);
 
 		xdp.data_hard_start = pa;
-		xdp.rxq = &priv->xdp_rxq[ch];
-		xdp.frame_sz = PAGE_SIZE;
 
 		port = priv->emac_port + cpsw->data.dual_emac;
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index 2f5e0ad23ad7..1b3385ec9645 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -335,6 +335,8 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
+		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+
 		if (status & CPDMA_RX_VLAN_ENCAP) {
 			xdp.data = pa + CPSW_HEADROOM +
 				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
@@ -348,8 +350,6 @@ static void cpsw_rx_handler(void *token, int len, int status)
 		xdp_set_data_meta_invalid(&xdp);
 
 		xdp.data_hard_start = pa;
-		xdp.rxq = &priv->xdp_rxq[ch];
-		xdp.frame_sz = PAGE_SIZE;
 
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port);
 		if (ret != CPSW_XDP_PASS)
diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 440486d9c999..14a7ee4c6899 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -44,12 +44,11 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
 		goto out;
 	}
 
+	xdp_init_buff(xdp, PAGE_SIZE, &nvchan->xdp_rxq);
 	xdp->data_hard_start = page_address(page);
 	xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM;
 	xdp_set_data_meta_invalid(xdp);
 	xdp->data_end = xdp->data + len;
-	xdp->rxq = &nvchan->xdp_rxq;
-	xdp->frame_sz = PAGE_SIZE;
 
 	memcpy(xdp->data, data, len);
 
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index fbed05ae7b0f..a82f7823d428 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1599,12 +1599,11 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 		struct xdp_buff xdp;
 		u32 act;
 
+		xdp_init_buff(&xdp, buflen, &tfile->xdp_rxq);
 		xdp.data_hard_start = buf;
 		xdp.data = buf + pad;
 		xdp_set_data_meta_invalid(&xdp);
 		xdp.data_end = xdp.data + len;
-		xdp.rxq = &tfile->xdp_rxq;
-		xdp.frame_sz = buflen;
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		if (act == XDP_REDIRECT || act == XDP_TX) {
@@ -2344,9 +2343,9 @@ static int tun_xdp_one(struct tun_struct *tun,
 			skb_xdp = true;
 			goto build;
 		}
+
+		xdp_init_buff(xdp, buflen, &tfile->xdp_rxq);
 		xdp_set_data_meta_invalid(xdp);
-		xdp->rxq = &tfile->xdp_rxq;
-		xdp->frame_sz = buflen;
 
 		act = bpf_prog_run_xdp(xdp_prog, xdp);
 		err = tun_xdp_act(tun, xdp_prog, xdp, act);
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 02bfcdf50a7a..25f3601fb6dd 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -654,7 +654,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 					struct veth_xdp_tx_bq *bq,
 					struct veth_stats *stats)
 {
-	u32 pktlen, headroom, act, metalen;
+	u32 pktlen, headroom, act, metalen, frame_sz;
 	void *orig_data, *orig_data_end;
 	struct bpf_prog *xdp_prog;
 	int mac_len, delta, off;
@@ -714,11 +714,11 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	xdp.data = skb_mac_header(skb);
 	xdp.data_end = xdp.data + pktlen;
 	xdp.data_meta = xdp.data;
-	xdp.rxq = &rq->xdp_rxq;
 
 	/* SKB "head" area always have tailroom for skb_shared_info */
-	xdp.frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
-	xdp.frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
+	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	xdp_init_buff(&xdp, frame_sz, &rq->xdp_rxq);
 
 	orig_data = xdp.data;
 	orig_data_end = xdp.data_end;
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 052975ea0af4..a22ce87bcd9c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -689,12 +689,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
 			page = xdp_page;
 		}
 
+		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
 		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
 		xdp.data = xdp.data_hard_start + xdp_headroom;
 		xdp.data_end = xdp.data + len;
 		xdp.data_meta = xdp.data;
-		xdp.rxq = &rq->xdp_rxq;
-		xdp.frame_sz = buflen;
 		orig_data = xdp.data;
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
@@ -859,12 +858,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		 * the descriptor on if we get an XDP_TX return code.
 		 */
 		data = page_address(xdp_page) + offset;
+		xdp_init_buff(&xdp, frame_sz - vi->hdr_len, &rq->xdp_rxq);
 		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
 		xdp.data = data + vi->hdr_len;
 		xdp.data_end = xdp.data + (len - vi->hdr_len);
 		xdp.data_meta = xdp.data;
-		xdp.rxq = &rq->xdp_rxq;
-		xdp.frame_sz = frame_sz - vi->hdr_len;
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index b01848ef4649..329397c60d84 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -864,12 +864,12 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
 	u32 act;
 	int err;
 
+	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
+		      &queue->xdp_rxq);
 	xdp->data_hard_start = page_address(pdata);
 	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
 	xdp_set_data_meta_invalid(xdp);
 	xdp->data_end = xdp->data + len;
-	xdp->rxq = &queue->xdp_rxq;
-	xdp->frame_sz = XEN_PAGE_SIZE - XDP_PACKET_HEADROOM;
 
 	act = bpf_prog_run_xdp(prog, xdp);
 	switch (act) {
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 700ad5db7f5d..3fb3a9aa1b71 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -76,6 +76,13 @@ struct xdp_buff {
 	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
 };
 
+static inline void
+xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
+{
+	xdp->frame_sz = frame_sz;
+	xdp->rxq = rxq;
+}
+
 /* Reserve memory area at end-of data area.
  *
  * This macro reserves tailroom in the XDP buffer by limiting the
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index c1c30a9f76f3..a8fa5a9e4137 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -640,10 +640,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	xdp.data = data + headroom;
 	xdp.data_meta = xdp.data;
 	xdp.data_end = xdp.data + size;
-	xdp.frame_sz = headroom + max_data_sz + tailroom;
 
 	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
-	xdp.rxq = &rxqueue->xdp_rxq;
+	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
+		      &rxqueue->xdp_rxq);
 	bpf_prog_change_xdp(NULL, prog);
 	ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
 	if (ret)
diff --git a/net/core/dev.c b/net/core/dev.c
index ce8fea2e2788..bac56afcf6bc 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4588,11 +4588,11 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	struct netdev_rx_queue *rxqueue;
 	void *orig_data, *orig_data_end;
 	u32 metalen, act = XDP_DROP;
+	u32 mac_len, frame_sz;
 	__be16 orig_eth_type;
 	struct ethhdr *eth;
 	bool orig_bcast;
 	int hlen, off;
-	u32 mac_len;
 
 	/* Reinjected packets coming from act_mirred or similar should
 	 * not get XDP generic processing.
@@ -4631,8 +4631,8 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	xdp->data_hard_start = skb->data - skb_headroom(skb);
 
 	/* SKB "head" area always have tailroom for skb_shared_info */
-	xdp->frame_sz  = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
-	xdp->frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
+	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
 	orig_data_end = xdp->data_end;
 	orig_data = xdp->data;
@@ -4641,7 +4641,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	orig_eth_type = eth->h_proto;
 
 	rxqueue = netif_get_rxqueue(skb);
-	xdp->rxq = &rxqueue->xdp_rxq;
+	xdp_init_buff(xdp, frame_sz, &rxqueue->xdp_rxq);
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-12 17:41 [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Lorenzo Bianconi
  2020-12-12 17:41 ` [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine Lorenzo Bianconi
@ 2020-12-12 17:41 ` Lorenzo Bianconi
  2020-12-15 12:36   ` Maciej Fijalkowski
  2020-12-14 15:32 ` [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Martin Habets
  2020-12-14 17:53 ` Camelia Alexandra Groza
  3 siblings, 1 reply; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-12 17:41 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed

Introduce the xdp_prepare_buff utility routine to initialize the
per-descriptor xdp_buff fields (e.g. the xdp_buff data pointers). Rely
on xdp_prepare_buff() in all XDP capable drivers.
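
For reference, the per-descriptor pattern this replaces (names are
illustrative; individual drivers differ slightly):

	/* before: open-coded per received frame */
	xdp.data_hard_start = hard_start;
	xdp.data = hard_start + headroom;
	xdp.data_end = xdp.data + len;
	xdp.data_meta = xdp.data;

	/* after */
	xdp_prepare_buff(&xdp, hard_start, headroom, len);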

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c      |  5 ++---
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c     |  4 +---
 drivers/net/ethernet/cavium/thunder/nicvf_main.c  |  7 ++++---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c    |  6 ++----
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c  | 13 +++++--------
 drivers/net/ethernet/intel/i40e/i40e_txrx.c       | 12 ++++++------
 drivers/net/ethernet/intel/ice/ice_txrx.c         | 11 ++++++-----
 drivers/net/ethernet/intel/igb/igb_main.c         | 12 ++++++------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c     | 12 ++++++------
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 12 ++++++------
 drivers/net/ethernet/marvell/mvneta.c             |  6 ++----
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c   |  7 +++----
 drivers/net/ethernet/mellanox/mlx4/en_rx.c        |  5 ++---
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +---
 .../net/ethernet/netronome/nfp/nfp_net_common.c   |  8 ++++----
 drivers/net/ethernet/qlogic/qede/qede_fp.c        |  4 +---
 drivers/net/ethernet/sfc/rx.c                     |  6 ++----
 drivers/net/ethernet/socionext/netsec.c           |  5 ++---
 drivers/net/ethernet/ti/cpsw.c                    | 15 +++++----------
 drivers/net/ethernet/ti/cpsw_new.c                | 15 +++++----------
 drivers/net/hyperv/netvsc_bpf.c                   |  4 +---
 drivers/net/tun.c                                 |  4 +---
 drivers/net/veth.c                                |  6 +-----
 drivers/net/virtio_net.c                          | 12 ++++--------
 drivers/net/xen-netfront.c                        |  4 +---
 include/net/xdp.h                                 | 12 ++++++++++++
 net/bpf/test_run.c                                |  5 +----
 net/core/dev.c                                    | 10 ++++------
 28 files changed, 96 insertions(+), 130 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 338dce73927e..1cfd0c98677e 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1519,10 +1519,9 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 	int ret;
 
 	rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
-	xdp->data = page_address(rx_info->page) + rx_info->page_offset;
+	xdp_prepare_buff(xdp, page_address(rx_info->page),
+			 rx_info->page_offset, rx_ring->ena_bufs[0].len);
 	xdp_set_data_meta_invalid(xdp);
-	xdp->data_hard_start = page_address(rx_info->page);
-	xdp->data_end = xdp->data + rx_ring->ena_bufs[0].len;
 	/* If for some reason we received a bigger packet than
 	 * we expect, then we simply drop it
 	 */
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index b7942c3440c0..e1664b86a7b8 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -134,10 +134,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
 
 	txr = rxr->bnapi->tx_ring;
 	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);
-	xdp.data_hard_start = *data_ptr - offset;
-	xdp.data = *data_ptr;
+	xdp_prepare_buff(&xdp, *data_ptr - offset, offset, *len);
 	xdp_set_data_meta_invalid(&xdp);
-	xdp.data_end = *data_ptr + *len;
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index 9fc672f075f2..9bdac04359c6 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -530,6 +530,7 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
 				struct cqe_rx_t *cqe_rx, struct snd_queue *sq,
 				struct rcv_queue *rq, struct sk_buff **skb)
 {
+	unsigned char *hard_start, *data;
 	struct xdp_buff xdp;
 	struct page *page;
 	u32 action;
@@ -549,10 +550,10 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
 
 	xdp_init_buff(&xdp, RCV_FRAG_LEN + XDP_PACKET_HEADROOM,
 		      &rq->xdp_rxq);
-	xdp.data_hard_start = page_address(page);
-	xdp.data = (void *)cpu_addr;
+	hard_start = page_address(page);
+	data = (unsigned char *)cpu_addr;
+	xdp_prepare_buff(&xdp, hard_start, data - hard_start, len);
 	xdp_set_data_meta_invalid(&xdp);
-	xdp.data_end = xdp.data + len;
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 93030000e0aa..86ee07c90154 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2538,10 +2538,8 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
 
 	xdp_init_buff(&xdp, DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE,
 		      &dpaa_fq->xdp_rxq);
-	xdp.data = vaddr + fd_off;
-	xdp.data_meta = xdp.data;
-	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
-	xdp.data_end = xdp.data + qm_fd_get_length(fd);
+	xdp_prepare_buff(&xdp, vaddr + fd_off - XDP_PACKET_HEADROOM,
+			 XDP_PACKET_HEADROOM, qm_fd_get_length(fd));
 
 	/* We reserve a fixed headroom of 256 bytes under the erratum and we
 	 * offer it all to XDP programs to use. If no room is left for the
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index a4ade0b5adb0..12358f5d59d6 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -350,7 +350,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
 	struct bpf_prog *xdp_prog;
 	struct xdp_buff xdp;
 	u32 xdp_act = XDP_PASS;
-	int err;
+	int err, offset;
 
 	rcu_read_lock();
 
@@ -358,13 +358,10 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
 	if (!xdp_prog)
 		goto out;
 
-	xdp_init_buff(&xdp,
-		      DPAA2_ETH_RX_BUF_RAW_SIZE -
-		      (dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM),
-		      &ch->xdp_rxq);
-	xdp.data = vaddr + dpaa2_fd_get_offset(fd);
-	xdp.data_end = xdp.data + dpaa2_fd_get_len(fd);
-	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
+	offset = dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM;
+	xdp_init_buff(&xdp, DPAA2_ETH_RX_BUF_RAW_SIZE - offset, &ch->xdp_rxq);
+	xdp_prepare_buff(&xdp, vaddr + offset, XDP_PACKET_HEADROOM,
+			 dpaa2_fd_get_len(fd));
 	xdp_set_data_meta_invalid(&xdp);
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 4dbbbd49c389..fcd1ca3343fb 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			xdp.data = page_address(rx_buffer->page) +
-				   rx_buffer->page_offset;
-			xdp.data_meta = xdp.data;
-			xdp.data_hard_start = xdp.data -
-					      i40e_rx_offset(rx_ring);
-			xdp.data_end = xdp.data + size;
+			unsigned int offset = i40e_rx_offset(rx_ring);
+			unsigned char *hard_start;
+
+			hard_start = page_address(rx_buffer->page) +
+				     rx_buffer->page_offset - offset;
+			xdp_prepare_buff(&xdp, hard_start, offset, size);
 #if (PAGE_SIZE > 4096)
 			/* At larger PAGE_SIZE, frame_sz depend on len size */
 			xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index d52d98d56367..a7a00060f520 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1094,8 +1094,9 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
 		struct ice_rx_buf *rx_buf;
+		unsigned int size, offset;
+		unsigned char *hard_start;
 		struct sk_buff *skb;
-		unsigned int size;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
 		u8 rx_ptype;
@@ -1138,10 +1139,10 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 			goto construct_skb;
 		}
 
-		xdp.data = page_address(rx_buf->page) + rx_buf->page_offset;
-		xdp.data_hard_start = xdp.data - ice_rx_offset(rx_ring);
-		xdp.data_meta = xdp.data;
-		xdp.data_end = xdp.data + size;
+		offset = ice_rx_offset(rx_ring);
+		hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
+			     offset;
+		xdp_prepare_buff(&xdp, hard_start, offset, size);
 #if (PAGE_SIZE > 4096)
 		/* At larger PAGE_SIZE, frame_sz depend on len size */
 		xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size);
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 365dfc0e3b65..070b2bb4e9ca 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8700,12 +8700,12 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			xdp.data = page_address(rx_buffer->page) +
-				   rx_buffer->page_offset;
-			xdp.data_meta = xdp.data;
-			xdp.data_hard_start = xdp.data -
-					      igb_rx_offset(rx_ring);
-			xdp.data_end = xdp.data + size;
+			unsigned int offset = igb_rx_offset(rx_ring);
+			unsigned char *hard_start;
+
+			hard_start = page_address(rx_buffer->page) +
+				     rx_buffer->page_offset - offset;
+			xdp_prepare_buff(&xdp, hard_start, offset, size);
 #if (PAGE_SIZE > 4096)
 			/* At larger PAGE_SIZE, frame_sz depend on len size */
 			xdp.frame_sz = igb_rx_frame_truesize(rx_ring, size);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index dcd49cfa36f7..e34054433c7a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2325,12 +2325,12 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			xdp.data = page_address(rx_buffer->page) +
-				   rx_buffer->page_offset;
-			xdp.data_meta = xdp.data;
-			xdp.data_hard_start = xdp.data -
-					      ixgbe_rx_offset(rx_ring);
-			xdp.data_end = xdp.data + size;
+			unsigned int offset = ixgbe_rx_offset(rx_ring);
+			unsigned char *hard_start;
+
+			hard_start = page_address(rx_buffer->page) +
+				     rx_buffer->page_offset - offset;
+			xdp_prepare_buff(&xdp, hard_start, offset, size);
 #if (PAGE_SIZE > 4096)
 			/* At larger PAGE_SIZE, frame_sz depend on len size */
 			xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, size);
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 624efcd71569..51df79005ccb 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -1160,12 +1160,12 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			xdp.data = page_address(rx_buffer->page) +
-				   rx_buffer->page_offset;
-			xdp.data_meta = xdp.data;
-			xdp.data_hard_start = xdp.data -
-					      ixgbevf_rx_offset(rx_ring);
-			xdp.data_end = xdp.data + size;
+			unsigned int offset = ixgbevf_rx_offset(rx_ring);
+			unsigned char *hard_start;
+
+			hard_start = page_address(rx_buffer->page) +
+				     rx_buffer->page_offset - offset;
+			xdp_prepare_buff(&xdp, hard_start, offset, size);
 #if (PAGE_SIZE > 4096)
 			/* At larger PAGE_SIZE, frame_sz depend on len size */
 			xdp.frame_sz = ixgbevf_rx_frame_truesize(rx_ring, size);
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index acbb9cb85ada..af6c9cf59809 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2263,10 +2263,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 
 	/* Prefetch header */
 	prefetch(data);
-
-	xdp->data_hard_start = data;
-	xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;
-	xdp->data_end = xdp->data + data_len;
+	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
+			 data_len);
 	xdp_set_data_meta_invalid(xdp);
 
 	sinfo = xdp_get_shared_info_from_buff(xdp);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index ca05dfc05058..8c2197b96515 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3564,16 +3564,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		if (xdp_prog) {
 			struct xdp_rxq_info *xdp_rxq;
 
-			xdp.data_hard_start = data;
-			xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
-			xdp.data_end = xdp.data + rx_bytes;
-
 			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
 				xdp_rxq = &rxq->xdp_rxq_short;
 			else
 				xdp_rxq = &rxq->xdp_rxq_long;
 
 			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
+			xdp_prepare_buff(&xdp, data,
+					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
+					 rx_bytes);
 			xdp_set_data_meta_invalid(&xdp);
 
 			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp, &ps);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 815381b484ca..86c63dedc689 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -776,10 +776,9 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
 						priv->frag_info[0].frag_size,
 						DMA_FROM_DEVICE);
 
-			xdp.data_hard_start = va - frags[0].page_offset;
-			xdp.data = va;
+			xdp_prepare_buff(&xdp, va - frags[0].page_offset,
+					 frags[0].page_offset, length);
 			xdp_set_data_meta_invalid(&xdp);
-			xdp.data_end = xdp.data + length;
 			orig_data = xdp.data;
 
 			act = bpf_prog_run_xdp(xdp_prog, &xdp);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index c68628b1f30b..a2f4f0ce427f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1128,10 +1128,8 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
 				u32 len, struct xdp_buff *xdp)
 {
 	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
-	xdp->data_hard_start = va;
-	xdp->data = va + headroom;
+	xdp_prepare_buff(xdp, va, headroom, len);
 	xdp_set_data_meta_invalid(xdp);
-	xdp->data_end = xdp->data + len;
 }
 
 static struct sk_buff *
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 68e03e8257f2..5d0046c24b8c 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -1914,10 +1914,10 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 			unsigned int dma_off;
 			int act;
 
-			xdp.data_hard_start = rxbuf->frag + NFP_NET_RX_BUF_HEADROOM;
-			xdp.data = orig_data;
-			xdp.data_meta = orig_data;
-			xdp.data_end = orig_data + pkt_len;
+			xdp_prepare_buff(&xdp,
+					 rxbuf->frag + NFP_NET_RX_BUF_HEADROOM,
+					 pkt_off - NFP_NET_RX_BUF_HEADROOM,
+					 pkt_len);
 
 			act = bpf_prog_run_xdp(xdp_prog, &xdp);
 
diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
index d40220043883..9c50df499046 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
@@ -1091,10 +1091,8 @@ static bool qede_rx_xdp(struct qede_dev *edev,
 	enum xdp_action act;
 
 	xdp_init_buff(&xdp, rxq->rx_buf_seg_size, &rxq->xdp_rxq);
-	xdp.data_hard_start = page_address(bd->data);
-	xdp.data = xdp.data_hard_start + *data_offset;
+	xdp_prepare_buff(&xdp, page_address(bd->data), *data_offset, *len);
 	xdp_set_data_meta_invalid(&xdp);
-	xdp.data_end = xdp.data + *len;
 
 	/* Queues always have a full reset currently, so for the time
 	 * being until there's atomic program replace just mark read
diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index eaa6650955d1..9015a1639234 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -294,12 +294,10 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
 	       efx->rx_prefix_size);
 
 	xdp_init_buff(&xdp, efx->rx_page_buf_step, &rx_queue->xdp_rxq_info);
-	xdp.data = *ehp;
-	xdp.data_hard_start = xdp.data - EFX_XDP_HEADROOM;
-
+	xdp_prepare_buff(&xdp, *ehp - EFX_XDP_HEADROOM, EFX_XDP_HEADROOM,
+			 rx_buf->len);
 	/* No support yet for XDP metadata */
 	xdp_set_data_meta_invalid(&xdp);
-	xdp.data_end = xdp.data + rx_buf->len;
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 	rcu_read_unlock();
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 945ca9517bf9..80bb1a6612b1 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -1015,10 +1015,9 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 					dma_dir);
 		prefetch(desc->addr);
 
-		xdp.data_hard_start = desc->addr;
-		xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
+		xdp_prepare_buff(&xdp, desc->addr, NETSEC_RXBUF_HEADROOM,
+				 pkt_len);
 		xdp_set_data_meta_invalid(&xdp);
-		xdp.data_end = xdp.data + pkt_len;
 
 		if (xdp_prog) {
 			xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 78a923391828..c08fd6a6be9b 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -392,22 +392,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
-		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+		int headroom = CPSW_HEADROOM, size = len;
 
+		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
 		if (status & CPDMA_RX_VLAN_ENCAP) {
-			xdp.data = pa + CPSW_HEADROOM +
-				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
-			xdp.data_end = xdp.data + len -
-				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
-		} else {
-			xdp.data = pa + CPSW_HEADROOM;
-			xdp.data_end = xdp.data + len;
+			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
 		}
 
+		xdp_prepare_buff(&xdp, pa, headroom, size);
 		xdp_set_data_meta_invalid(&xdp);
 
-		xdp.data_hard_start = pa;
-
 		port = priv->emac_port + cpsw->data.dual_emac;
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
 		if (ret != CPSW_XDP_PASS)
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index 1b3385ec9645..c74c997d1cf2 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -335,22 +335,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
-		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+		int headroom = CPSW_HEADROOM, size = len;
 
+		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
 		if (status & CPDMA_RX_VLAN_ENCAP) {
-			xdp.data = pa + CPSW_HEADROOM +
-				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
-			xdp.data_end = xdp.data + len -
-				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
-		} else {
-			xdp.data = pa + CPSW_HEADROOM;
-			xdp.data_end = xdp.data + len;
+			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
 		}
 
+		xdp_prepare_buff(&xdp, pa, headroom, size);
 		xdp_set_data_meta_invalid(&xdp);
 
-		xdp.data_hard_start = pa;
-
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port);
 		if (ret != CPSW_XDP_PASS)
 			goto requeue;
diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 14a7ee4c6899..93c202d6aff5 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -45,10 +45,8 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
 	}
 
 	xdp_init_buff(xdp, PAGE_SIZE, &nvchan->xdp_rxq);
-	xdp->data_hard_start = page_address(page);
-	xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM;
+	xdp_prepare_buff(xdp, page_address(page), NETVSC_XDP_HDRM, len);
 	xdp_set_data_meta_invalid(xdp);
-	xdp->data_end = xdp->data + len;
 
 	memcpy(xdp->data, data, len);
 
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index a82f7823d428..c7cbd058b345 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1600,10 +1600,8 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 		u32 act;
 
 		xdp_init_buff(&xdp, buflen, &tfile->xdp_rxq);
-		xdp.data_hard_start = buf;
-		xdp.data = buf + pad;
+		xdp_prepare_buff(&xdp, buf, pad, len);
 		xdp_set_data_meta_invalid(&xdp);
-		xdp.data_end = xdp.data + len;
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		if (act == XDP_REDIRECT || act == XDP_TX) {
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 25f3601fb6dd..30a7f2ad39c3 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -710,11 +710,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		skb = nskb;
 	}
 
-	xdp.data_hard_start = skb->head;
-	xdp.data = skb_mac_header(skb);
-	xdp.data_end = xdp.data + pktlen;
-	xdp.data_meta = xdp.data;
-
+	xdp_prepare_buff(&xdp, skb->head, skb->mac_header, pktlen);
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
 	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a22ce87bcd9c..e57b2d452cbc 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -690,10 +690,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		}
 
 		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
-		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
-		xdp.data = xdp.data_hard_start + xdp_headroom;
-		xdp.data_end = xdp.data + len;
-		xdp.data_meta = xdp.data;
+		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
+				 xdp_headroom, len);
 		orig_data = xdp.data;
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
@@ -859,10 +857,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		 */
 		data = page_address(xdp_page) + offset;
 		xdp_init_buff(&xdp, frame_sz - vi->hdr_len, &rq->xdp_rxq);
-		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
-		xdp.data = data + vi->hdr_len;
-		xdp.data_end = xdp.data + (len - vi->hdr_len);
-		xdp.data_meta = xdp.data;
+		xdp_prepare_buff(&xdp, data - VIRTIO_XDP_HEADROOM + vi->hdr_len,
+				 VIRTIO_XDP_HEADROOM, len - vi->hdr_len);
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 329397c60d84..61d3f5f8b7f3 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
 
 	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
 		      &queue->xdp_rxq);
-	xdp->data_hard_start = page_address(pdata);
-	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
+	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
 	xdp_set_data_meta_invalid(xdp);
-	xdp->data_end = xdp->data + len;
 
 	act = bpf_prog_run_xdp(prog, xdp);
 	switch (act) {
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 3fb3a9aa1b71..66d8a4b317a3 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 	xdp->rxq = rxq;
 }
 
+static inline void
+xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
+		 int headroom, int data_len)
+{
+	unsigned char *data = hard_start + headroom;
+
+	xdp->data_hard_start = hard_start;
+	xdp->data = data;
+	xdp->data_end = data + data_len;
+	xdp->data_meta = data;
+}
+
 /* Reserve memory area at end-of data area.
  *
  * This macro reserves tailroom in the XDP buffer by limiting the
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index a8fa5a9e4137..fe5a80d396e3 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -636,10 +636,7 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
-	xdp.data_hard_start = data;
-	xdp.data = data + headroom;
-	xdp.data_meta = xdp.data;
-	xdp.data_end = xdp.data + size;
+	xdp_prepare_buff(&xdp, data, headroom, size);
 
 	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
 	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
diff --git a/net/core/dev.c b/net/core/dev.c
index bac56afcf6bc..2997177876cc 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4592,7 +4592,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	__be16 orig_eth_type;
 	struct ethhdr *eth;
 	bool orig_bcast;
-	int hlen, off;
+	int off;
 
 	/* Reinjected packets coming from act_mirred or similar should
 	 * not get XDP generic processing.
@@ -4624,11 +4624,9 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	 * header.
 	 */
 	mac_len = skb->data - skb_mac_header(skb);
-	hlen = skb_headlen(skb) + mac_len;
-	xdp->data = skb->data - mac_len;
-	xdp->data_meta = xdp->data;
-	xdp->data_end = xdp->data + hlen;
-	xdp->data_hard_start = skb->data - skb_headroom(skb);
+	xdp_prepare_buff(xdp, skb->data - skb_headroom(skb),
+			 skb_headroom(skb) - mac_len,
+			 skb_headlen(skb) + mac_len);
 
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff
  2020-12-12 17:41 [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Lorenzo Bianconi
  2020-12-12 17:41 ` [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine Lorenzo Bianconi
  2020-12-12 17:41 ` [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff " Lorenzo Bianconi
@ 2020-12-14 15:32 ` Martin Habets
  2020-12-14 17:53 ` Camelia Alexandra Groza
  3 siblings, 0 replies; 19+ messages in thread
From: Martin Habets @ 2020-12-14 15:32 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed

On Sat, Dec 12, 2020 at 06:41:47PM +0100, Lorenzo Bianconi wrote:
> Introduce xdp_init_buff and xdp_prepare_buff utility routines to initialize
> xdp_buff data structure and remove duplicated code in all XDP capable
> drivers.
> 
> Changes since v2:
> - precompute xdp->data as hard_start + headroom and save it in a local
>   variable to reuse it for xdp->data_end and xdp->data_meta in
>   xdp_prepare_buff()
> 
> Changes since v1:
> - introduce xdp_prepare_buff utility routine
> 
> Lorenzo Bianconi (2):
>   net: xdp: introduce xdp_init_buff utility routine
>   net: xdp: introduce xdp_prepare_buff utility routine

For changes in drivers/net/ethernet/sfc:

Acked-by: Martin Habets <habetsm.xilinx@gmail.com>

>  drivers/net/ethernet/amazon/ena/ena_netdev.c  |  8 +++-----
>  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  7 ++-----
>  .../net/ethernet/cavium/thunder/nicvf_main.c  | 11 ++++++-----
>  .../net/ethernet/freescale/dpaa/dpaa_eth.c    | 10 ++++------
>  .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  | 13 +++++--------
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c   | 18 +++++++++---------
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 17 +++++++++--------
>  drivers/net/ethernet/intel/igb/igb_main.c     | 18 +++++++++---------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 19 +++++++++----------
>  .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 19 +++++++++----------
>  drivers/net/ethernet/marvell/mvneta.c         |  9 +++------
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 13 +++++++------
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  8 +++-----
>  .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  7 ++-----
>  .../ethernet/netronome/nfp/nfp_net_common.c   | 12 ++++++------
>  drivers/net/ethernet/qlogic/qede/qede_fp.c    |  7 ++-----
>  drivers/net/ethernet/sfc/rx.c                 |  9 +++------
>  drivers/net/ethernet/socionext/netsec.c       |  8 +++-----
>  drivers/net/ethernet/ti/cpsw.c                | 17 ++++++-----------
>  drivers/net/ethernet/ti/cpsw_new.c            | 17 ++++++-----------
>  drivers/net/hyperv/netvsc_bpf.c               |  7 ++-----
>  drivers/net/tun.c                             | 11 ++++-------
>  drivers/net/veth.c                            | 14 +++++---------
>  drivers/net/virtio_net.c                      | 18 ++++++------------
>  drivers/net/xen-netfront.c                    |  8 +++-----
>  include/net/xdp.h                             | 19 +++++++++++++++++++
>  net/bpf/test_run.c                            |  9 +++------
>  net/core/dev.c                                | 18 ++++++++----------
>  28 files changed, 156 insertions(+), 195 deletions(-)
> 
> -- 
> 2.29.2

-- 
Martin Habets <habetsm.xilinx@gmail.com>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff
  2020-12-12 17:41 [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Lorenzo Bianconi
                   ` (2 preceding siblings ...)
  2020-12-14 15:32 ` [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Martin Habets
@ 2020-12-14 17:53 ` Camelia Alexandra Groza
  3 siblings, 0 replies; 19+ messages in thread
From: Camelia Alexandra Groza @ 2020-12-14 17:53 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed

> -----Original Message-----
> From: Lorenzo Bianconi <lorenzo@kernel.org>
> Sent: Saturday, December 12, 2020 19:42
> To: bpf@vger.kernel.org; netdev@vger.kernel.org
> Cc: davem@davemloft.net; kuba@kernel.org; ast@kernel.org;
> daniel@iogearbox.net; brouer@redhat.com; lorenzo.bianconi@redhat.com;
> alexander.duyck@gmail.com; maciej.fijalkowski@intel.com;
> saeed@kernel.org
> Subject: [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff
> 
> Introduce xdp_init_buff and xdp_prepare_buff utility routines to initialize
> xdp_buff data structure and remove duplicated code in all XDP capable
> drivers.
> 
> Changes since v2:
> - precompute xdp->data as hard_start + headroom and save it in a local
>   variable to reuse it for xdp->data_end and xdp->data_meta in
>   xdp_prepare_buff()
> 
> Changes since v1:
> - introduce xdp_prepare_buff utility routine
> 
> Lorenzo Bianconi (2):
>   net: xdp: introduce xdp_init_buff utility routine
>   net: xdp: introduce xdp_prepare_buff utility routine

For the drivers/net/ethernet/freescale/dpaa changes:
Acked-by: Camelia Groza <camelia.groza@nxp.com>

>  drivers/net/ethernet/amazon/ena/ena_netdev.c  |  8 +++-----
>  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  7 ++-----
>  .../net/ethernet/cavium/thunder/nicvf_main.c  | 11 ++++++-----
>  .../net/ethernet/freescale/dpaa/dpaa_eth.c    | 10 ++++------
>  .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  | 13 +++++--------
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c   | 18 +++++++++---------
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 17 +++++++++--------
>  drivers/net/ethernet/intel/igb/igb_main.c     | 18 +++++++++---------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 19 +++++++++----------
>  .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 19 +++++++++----------
>  drivers/net/ethernet/marvell/mvneta.c         |  9 +++------
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 13 +++++++------
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  8 +++-----
>  .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  7 ++-----
>  .../ethernet/netronome/nfp/nfp_net_common.c   | 12 ++++++------
>  drivers/net/ethernet/qlogic/qede/qede_fp.c    |  7 ++-----
>  drivers/net/ethernet/sfc/rx.c                 |  9 +++------
>  drivers/net/ethernet/socionext/netsec.c       |  8 +++-----
>  drivers/net/ethernet/ti/cpsw.c                | 17 ++++++-----------
>  drivers/net/ethernet/ti/cpsw_new.c            | 17 ++++++-----------
>  drivers/net/hyperv/netvsc_bpf.c               |  7 ++-----
>  drivers/net/tun.c                             | 11 ++++-------
>  drivers/net/veth.c                            | 14 +++++---------
>  drivers/net/virtio_net.c                      | 18 ++++++------------
>  drivers/net/xen-netfront.c                    |  8 +++-----
>  include/net/xdp.h                             | 19 +++++++++++++++++++
>  net/bpf/test_run.c                            |  9 +++------
>  net/core/dev.c                                | 18 ++++++++----------
>  28 files changed, 156 insertions(+), 195 deletions(-)
> 
> --
> 2.29.2


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-12 17:41 ` [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff " Lorenzo Bianconi
@ 2020-12-15 12:36   ` Maciej Fijalkowski
  2020-12-15 13:47     ` Lorenzo Bianconi
  0 siblings, 1 reply; 19+ messages in thread
From: Maciej Fijalkowski @ 2020-12-15 12:36 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, kuba, ast, daniel, brouer, lorenzo.bianconi,
	alexander.duyck, saeed

On Sat, Dec 12, 2020 at 06:41:49PM +0100, Lorenzo Bianconi wrote:
> Introduce xdp_prepare_buff utility routine to initialize per-descriptor
> xdp_buff fields (e.g. xdp_buff pointers). Rely on xdp_prepare_buff() in
> all XDP capable drivers.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  drivers/net/ethernet/amazon/ena/ena_netdev.c      |  5 ++---
>  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c     |  4 +---
>  drivers/net/ethernet/cavium/thunder/nicvf_main.c  |  7 ++++---
>  drivers/net/ethernet/freescale/dpaa/dpaa_eth.c    |  6 ++----
>  drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c  | 13 +++++--------
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c       | 12 ++++++------
>  drivers/net/ethernet/intel/ice/ice_txrx.c         | 11 ++++++-----
>  drivers/net/ethernet/intel/igb/igb_main.c         | 12 ++++++------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c     | 12 ++++++------
>  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 12 ++++++------
>  drivers/net/ethernet/marvell/mvneta.c             |  6 ++----
>  drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c   |  7 +++----
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c        |  5 ++---
>  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +---
>  .../net/ethernet/netronome/nfp/nfp_net_common.c   |  8 ++++----
>  drivers/net/ethernet/qlogic/qede/qede_fp.c        |  4 +---
>  drivers/net/ethernet/sfc/rx.c                     |  6 ++----
>  drivers/net/ethernet/socionext/netsec.c           |  5 ++---
>  drivers/net/ethernet/ti/cpsw.c                    | 15 +++++----------
>  drivers/net/ethernet/ti/cpsw_new.c                | 15 +++++----------
>  drivers/net/hyperv/netvsc_bpf.c                   |  4 +---
>  drivers/net/tun.c                                 |  4 +---
>  drivers/net/veth.c                                |  6 +-----
>  drivers/net/virtio_net.c                          | 12 ++++--------
>  drivers/net/xen-netfront.c                        |  4 +---
>  include/net/xdp.h                                 | 12 ++++++++++++
>  net/bpf/test_run.c                                |  5 +----
>  net/core/dev.c                                    | 10 ++++------
>  28 files changed, 96 insertions(+), 130 deletions(-)
> 
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> index 338dce73927e..1cfd0c98677e 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> @@ -1519,10 +1519,9 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
>  	int ret;
>  
>  	rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
> -	xdp->data = page_address(rx_info->page) + rx_info->page_offset;
> +	xdp_prepare_buff(xdp, page_address(rx_info->page),
> +			 rx_info->page_offset, rx_ring->ena_bufs[0].len);
>  	xdp_set_data_meta_invalid(xdp);
> -	xdp->data_hard_start = page_address(rx_info->page);
> -	xdp->data_end = xdp->data + rx_ring->ena_bufs[0].len;
>  	/* If for some reason we received a bigger packet than
>  	 * we expect, then we simply drop it
>  	 */
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> index b7942c3440c0..e1664b86a7b8 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> @@ -134,10 +134,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
>  
>  	txr = rxr->bnapi->tx_ring;
>  	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);
> -	xdp.data_hard_start = *data_ptr - offset;
> -	xdp.data = *data_ptr;
> +	xdp_prepare_buff(&xdp, *data_ptr - offset, offset, *len);
>  	xdp_set_data_meta_invalid(&xdp);
> -	xdp.data_end = *data_ptr + *len;
>  	orig_data = xdp.data;
>  
>  	rcu_read_lock();
> diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> index 9fc672f075f2..9bdac04359c6 100644
> --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> @@ -530,6 +530,7 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
>  				struct cqe_rx_t *cqe_rx, struct snd_queue *sq,
>  				struct rcv_queue *rq, struct sk_buff **skb)
>  {
> +	unsigned char *hard_start, *data;
>  	struct xdp_buff xdp;
>  	struct page *page;
>  	u32 action;
> @@ -549,10 +550,10 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
>  
>  	xdp_init_buff(&xdp, RCV_FRAG_LEN + XDP_PACKET_HEADROOM,
>  		      &rq->xdp_rxq);
> -	xdp.data_hard_start = page_address(page);
> -	xdp.data = (void *)cpu_addr;
> +	hard_start = page_address(page);
> +	data = (unsigned char *)cpu_addr;
> +	xdp_prepare_buff(&xdp, hard_start, data - hard_start, len);
>  	xdp_set_data_meta_invalid(&xdp);
> -	xdp.data_end = xdp.data + len;
>  	orig_data = xdp.data;
>  
>  	rcu_read_lock();
> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> index 93030000e0aa..86ee07c90154 100644
> --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> @@ -2538,10 +2538,8 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
>  
>  	xdp_init_buff(&xdp, DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE,
>  		      &dpaa_fq->xdp_rxq);
> -	xdp.data = vaddr + fd_off;
> -	xdp.data_meta = xdp.data;
> -	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
> -	xdp.data_end = xdp.data + qm_fd_get_length(fd);
> +	xdp_prepare_buff(&xdp, vaddr + fd_off - XDP_PACKET_HEADROOM,
> +			 XDP_PACKET_HEADROOM, qm_fd_get_length(fd));
>  
>  	/* We reserve a fixed headroom of 256 bytes under the erratum and we
>  	 * offer it all to XDP programs to use. If no room is left for the
> diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> index a4ade0b5adb0..12358f5d59d6 100644
> --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> @@ -350,7 +350,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
>  	struct bpf_prog *xdp_prog;
>  	struct xdp_buff xdp;
>  	u32 xdp_act = XDP_PASS;
> -	int err;
> +	int err, offset;
>  
>  	rcu_read_lock();
>  
> @@ -358,13 +358,10 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
>  	if (!xdp_prog)
>  		goto out;
>  
> -	xdp_init_buff(&xdp,
> -		      DPAA2_ETH_RX_BUF_RAW_SIZE -
> -		      (dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM),
> -		      &ch->xdp_rxq);
> -	xdp.data = vaddr + dpaa2_fd_get_offset(fd);
> -	xdp.data_end = xdp.data + dpaa2_fd_get_len(fd);
> -	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
> +	offset = dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM;
> +	xdp_init_buff(&xdp, DPAA2_ETH_RX_BUF_RAW_SIZE - offset, &ch->xdp_rxq);
> +	xdp_prepare_buff(&xdp, vaddr + offset, XDP_PACKET_HEADROOM,
> +			 dpaa2_fd_get_len(fd));
>  	xdp_set_data_meta_invalid(&xdp);
>  
>  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> index 4dbbbd49c389..fcd1ca3343fb 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
>  
>  		/* retrieve a buffer from the ring */
>  		if (!skb) {
> -			xdp.data = page_address(rx_buffer->page) +
> -				   rx_buffer->page_offset;
> -			xdp.data_meta = xdp.data;
> -			xdp.data_hard_start = xdp.data -
> -					      i40e_rx_offset(rx_ring);
> -			xdp.data_end = xdp.data + size;
> +			unsigned int offset = i40e_rx_offset(rx_ring);

I now see that we could call i40e_rx_offset() just once per NAPI poll, so can
you pull this variable out of the loop and initialize it a single time? The
same applies to the other Intel drivers as well.
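
Something along these lines is what I have in mind (completely untested,
i40e only shown for illustration; ice/igb/ixgbe/ixgbevf would follow the
same pattern):

	static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
	{
		/* the rx offset only depends on the ring setup, so compute it
		 * once for the whole poll instead of once per received frame
		 */
		unsigned int offset = i40e_rx_offset(rx_ring);
		...
		while (...) {
			...
			/* retrieve a buffer from the ring */
			if (!skb) {
				unsigned char *hard_start;

				hard_start = page_address(rx_buffer->page) +
					     rx_buffer->page_offset - offset;
				xdp_prepare_buff(&xdp, hard_start, offset, size);
			...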

I also feel it's sub-optimal for drivers that calculate data_hard_start
from data (intel, bnxt, sfc and mlx4 take this approach), due to the extra
add, but I don't have a solution for that. It would be weird to have yet
another helper. Not sure what other people think, but the phrase "death by
1000 cuts" comes to mind :)

> +			unsigned char *hard_start;
> +
> +			hard_start = page_address(rx_buffer->page) +
> +				     rx_buffer->page_offset - offset;
> +			xdp_prepare_buff(&xdp, hard_start, offset, size);
>  #if (PAGE_SIZE > 4096)
>  			/* At larger PAGE_SIZE, frame_sz depend on len size */
>  			xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
> diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
> index d52d98d56367..a7a00060f520 100644
> --- a/drivers/net/ethernet/intel/ice/ice_txrx.c
> +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
> @@ -1094,8 +1094,9 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
>  	while (likely(total_rx_pkts < (unsigned int)budget)) {
>  		union ice_32b_rx_flex_desc *rx_desc;
>  		struct ice_rx_buf *rx_buf;
> +		unsigned int size, offset;
> +		unsigned char *hard_start;
>  		struct sk_buff *skb;
> -		unsigned int size;
>  		u16 stat_err_bits;
>  		u16 vlan_tag = 0;
>  		u8 rx_ptype;
> @@ -1138,10 +1139,10 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
>  			goto construct_skb;
>  		}
>  
> -		xdp.data = page_address(rx_buf->page) + rx_buf->page_offset;
> -		xdp.data_hard_start = xdp.data - ice_rx_offset(rx_ring);
> -		xdp.data_meta = xdp.data;
> -		xdp.data_end = xdp.data + size;
> +		offset = ice_rx_offset(rx_ring);
> +		hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
> +			     offset;
> +		xdp_prepare_buff(&xdp, hard_start, offset, size);
>  #if (PAGE_SIZE > 4096)
>  		/* At larger PAGE_SIZE, frame_sz depend on len size */
>  		xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size);
> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> index 365dfc0e3b65..070b2bb4e9ca 100644
> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> @@ -8700,12 +8700,12 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
>  
>  		/* retrieve a buffer from the ring */
>  		if (!skb) {
> -			xdp.data = page_address(rx_buffer->page) +
> -				   rx_buffer->page_offset;
> -			xdp.data_meta = xdp.data;
> -			xdp.data_hard_start = xdp.data -
> -					      igb_rx_offset(rx_ring);
> -			xdp.data_end = xdp.data + size;
> +			unsigned int offset = igb_rx_offset(rx_ring);
> +			unsigned char *hard_start;
> +
> +			hard_start = page_address(rx_buffer->page) +
> +				     rx_buffer->page_offset - offset;
> +			xdp_prepare_buff(&xdp, hard_start, offset, size);
>  #if (PAGE_SIZE > 4096)
>  			/* At larger PAGE_SIZE, frame_sz depend on len size */
>  			xdp.frame_sz = igb_rx_frame_truesize(rx_ring, size);
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index dcd49cfa36f7..e34054433c7a 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -2325,12 +2325,12 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
>  
>  		/* retrieve a buffer from the ring */
>  		if (!skb) {
> -			xdp.data = page_address(rx_buffer->page) +
> -				   rx_buffer->page_offset;
> -			xdp.data_meta = xdp.data;
> -			xdp.data_hard_start = xdp.data -
> -					      ixgbe_rx_offset(rx_ring);
> -			xdp.data_end = xdp.data + size;
> +			unsigned int offset = ixgbe_rx_offset(rx_ring);
> +			unsigned char *hard_start;
> +
> +			hard_start = page_address(rx_buffer->page) +
> +				     rx_buffer->page_offset - offset;
> +			xdp_prepare_buff(&xdp, hard_start, offset, size);
>  #if (PAGE_SIZE > 4096)
>  			/* At larger PAGE_SIZE, frame_sz depend on len size */
>  			xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, size);
> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> index 624efcd71569..51df79005ccb 100644
> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> @@ -1160,12 +1160,12 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
>  
>  		/* retrieve a buffer from the ring */
>  		if (!skb) {
> -			xdp.data = page_address(rx_buffer->page) +
> -				   rx_buffer->page_offset;
> -			xdp.data_meta = xdp.data;
> -			xdp.data_hard_start = xdp.data -
> -					      ixgbevf_rx_offset(rx_ring);
> -			xdp.data_end = xdp.data + size;
> +			unsigned int offset = ixgbevf_rx_offset(rx_ring);
> +			unsigned char *hard_start;
> +
> +			hard_start = page_address(rx_buffer->page) +
> +				     rx_buffer->page_offset - offset;
> +			xdp_prepare_buff(&xdp, hard_start, offset, size);
>  #if (PAGE_SIZE > 4096)
>  			/* At larger PAGE_SIZE, frame_sz depend on len size */
>  			xdp.frame_sz = ixgbevf_rx_frame_truesize(rx_ring, size);
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index acbb9cb85ada..af6c9cf59809 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2263,10 +2263,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
>  
>  	/* Prefetch header */
>  	prefetch(data);
> -
> -	xdp->data_hard_start = data;
> -	xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;
> -	xdp->data_end = xdp->data + data_len;
> +	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
> +			 data_len);
>  	xdp_set_data_meta_invalid(xdp);
>  
>  	sinfo = xdp_get_shared_info_from_buff(xdp);
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index ca05dfc05058..8c2197b96515 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -3564,16 +3564,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>  		if (xdp_prog) {
>  			struct xdp_rxq_info *xdp_rxq;
>  
> -			xdp.data_hard_start = data;
> -			xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
> -			xdp.data_end = xdp.data + rx_bytes;
> -
>  			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
>  				xdp_rxq = &rxq->xdp_rxq_short;
>  			else
>  				xdp_rxq = &rxq->xdp_rxq_long;
>  
>  			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
> +			xdp_prepare_buff(&xdp, data,
> +					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
> +					 rx_bytes);
>  			xdp_set_data_meta_invalid(&xdp);
>  
>  			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp, &ps);
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 815381b484ca..86c63dedc689 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -776,10 +776,9 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>  						priv->frag_info[0].frag_size,
>  						DMA_FROM_DEVICE);
>  
> -			xdp.data_hard_start = va - frags[0].page_offset;
> -			xdp.data = va;
> +			xdp_prepare_buff(&xdp, va - frags[0].page_offset,
> +					 frags[0].page_offset, length);
>  			xdp_set_data_meta_invalid(&xdp);
> -			xdp.data_end = xdp.data + length;
>  			orig_data = xdp.data;
>  
>  			act = bpf_prog_run_xdp(xdp_prog, &xdp);
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index c68628b1f30b..a2f4f0ce427f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -1128,10 +1128,8 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
>  				u32 len, struct xdp_buff *xdp)
>  {
>  	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
> -	xdp->data_hard_start = va;
> -	xdp->data = va + headroom;
> +	xdp_prepare_buff(xdp, va, headroom, len);
>  	xdp_set_data_meta_invalid(xdp);
> -	xdp->data_end = xdp->data + len;
>  }
>  
>  static struct sk_buff *
> diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> index 68e03e8257f2..5d0046c24b8c 100644
> --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> @@ -1914,10 +1914,10 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
>  			unsigned int dma_off;
>  			int act;
>  
> -			xdp.data_hard_start = rxbuf->frag + NFP_NET_RX_BUF_HEADROOM;
> -			xdp.data = orig_data;
> -			xdp.data_meta = orig_data;
> -			xdp.data_end = orig_data + pkt_len;
> +			xdp_prepare_buff(&xdp,
> +					 rxbuf->frag + NFP_NET_RX_BUF_HEADROOM,
> +					 pkt_off - NFP_NET_RX_BUF_HEADROOM,
> +					 pkt_len);
>  
>  			act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  
> diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
> index d40220043883..9c50df499046 100644
> --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
> +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
> @@ -1091,10 +1091,8 @@ static bool qede_rx_xdp(struct qede_dev *edev,
>  	enum xdp_action act;
>  
>  	xdp_init_buff(&xdp, rxq->rx_buf_seg_size, &rxq->xdp_rxq);
> -	xdp.data_hard_start = page_address(bd->data);
> -	xdp.data = xdp.data_hard_start + *data_offset;
> +	xdp_prepare_buff(&xdp, page_address(bd->data), *data_offset, *len);
>  	xdp_set_data_meta_invalid(&xdp);
> -	xdp.data_end = xdp.data + *len;
>  
>  	/* Queues always have a full reset currently, so for the time
>  	 * being until there's atomic program replace just mark read
> diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
> index eaa6650955d1..9015a1639234 100644
> --- a/drivers/net/ethernet/sfc/rx.c
> +++ b/drivers/net/ethernet/sfc/rx.c
> @@ -294,12 +294,10 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
>  	       efx->rx_prefix_size);
>  
>  	xdp_init_buff(&xdp, efx->rx_page_buf_step, &rx_queue->xdp_rxq_info);
> -	xdp.data = *ehp;
> -	xdp.data_hard_start = xdp.data - EFX_XDP_HEADROOM;
> -
> +	xdp_prepare_buff(&xdp, *ehp - EFX_XDP_HEADROOM, EFX_XDP_HEADROOM,
> +			 rx_buf->len);
>  	/* No support yet for XDP metadata */
>  	xdp_set_data_meta_invalid(&xdp);
> -	xdp.data_end = xdp.data + rx_buf->len;
>  
>  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  	rcu_read_unlock();
> diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
> index 945ca9517bf9..80bb1a6612b1 100644
> --- a/drivers/net/ethernet/socionext/netsec.c
> +++ b/drivers/net/ethernet/socionext/netsec.c
> @@ -1015,10 +1015,9 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
>  					dma_dir);
>  		prefetch(desc->addr);
>  
> -		xdp.data_hard_start = desc->addr;
> -		xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
> +		xdp_prepare_buff(&xdp, desc->addr, NETSEC_RXBUF_HEADROOM,
> +				 pkt_len);
>  		xdp_set_data_meta_invalid(&xdp);
> -		xdp.data_end = xdp.data + pkt_len;
>  
>  		if (xdp_prog) {
>  			xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index 78a923391828..c08fd6a6be9b 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -392,22 +392,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	}
>  
>  	if (priv->xdp_prog) {
> -		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> +		int headroom = CPSW_HEADROOM, size = len;
>  
> +		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
>  		if (status & CPDMA_RX_VLAN_ENCAP) {
> -			xdp.data = pa + CPSW_HEADROOM +
> -				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> -			xdp.data_end = xdp.data + len -
> -				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> -		} else {
> -			xdp.data = pa + CPSW_HEADROOM;
> -			xdp.data_end = xdp.data + len;
> +			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> +			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
>  		}
>  
> +		xdp_prepare_buff(&xdp, pa, headroom, size);
>  		xdp_set_data_meta_invalid(&xdp);
>  
> -		xdp.data_hard_start = pa;
> -
>  		port = priv->emac_port + cpsw->data.dual_emac;
>  		ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
>  		if (ret != CPSW_XDP_PASS)
> diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
> index 1b3385ec9645..c74c997d1cf2 100644
> --- a/drivers/net/ethernet/ti/cpsw_new.c
> +++ b/drivers/net/ethernet/ti/cpsw_new.c
> @@ -335,22 +335,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	}
>  
>  	if (priv->xdp_prog) {
> -		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> +		int headroom = CPSW_HEADROOM, size = len;
>  
> +		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
>  		if (status & CPDMA_RX_VLAN_ENCAP) {
> -			xdp.data = pa + CPSW_HEADROOM +
> -				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> -			xdp.data_end = xdp.data + len -
> -				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> -		} else {
> -			xdp.data = pa + CPSW_HEADROOM;
> -			xdp.data_end = xdp.data + len;
> +			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> +			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
>  		}
>  
> +		xdp_prepare_buff(&xdp, pa, headroom, size);
>  		xdp_set_data_meta_invalid(&xdp);
>  
> -		xdp.data_hard_start = pa;
> -
>  		ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port);
>  		if (ret != CPSW_XDP_PASS)
>  			goto requeue;
> diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
> index 14a7ee4c6899..93c202d6aff5 100644
> --- a/drivers/net/hyperv/netvsc_bpf.c
> +++ b/drivers/net/hyperv/netvsc_bpf.c
> @@ -45,10 +45,8 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
>  	}
>  
>  	xdp_init_buff(xdp, PAGE_SIZE, &nvchan->xdp_rxq);
> -	xdp->data_hard_start = page_address(page);
> -	xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM;
> +	xdp_prepare_buff(xdp, page_address(page), NETVSC_XDP_HDRM, len);
>  	xdp_set_data_meta_invalid(xdp);
> -	xdp->data_end = xdp->data + len;
>  
>  	memcpy(xdp->data, data, len);
>  
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index a82f7823d428..c7cbd058b345 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1600,10 +1600,8 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>  		u32 act;
>  
>  		xdp_init_buff(&xdp, buflen, &tfile->xdp_rxq);
> -		xdp.data_hard_start = buf;
> -		xdp.data = buf + pad;
> +		xdp_prepare_buff(&xdp, buf, pad, len);
>  		xdp_set_data_meta_invalid(&xdp);
> -		xdp.data_end = xdp.data + len;
>  
>  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  		if (act == XDP_REDIRECT || act == XDP_TX) {
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 25f3601fb6dd..30a7f2ad39c3 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -710,11 +710,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
>  		skb = nskb;
>  	}
>  
> -	xdp.data_hard_start = skb->head;
> -	xdp.data = skb_mac_header(skb);
> -	xdp.data_end = xdp.data + pktlen;
> -	xdp.data_meta = xdp.data;
> -
> +	xdp_prepare_buff(&xdp, skb->head, skb->mac_header, pktlen);
>  	/* SKB "head" area always have tailroom for skb_shared_info */
>  	frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
>  	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index a22ce87bcd9c..e57b2d452cbc 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -690,10 +690,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
>  		}
>  
>  		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
> -		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> -		xdp.data = xdp.data_hard_start + xdp_headroom;
> -		xdp.data_end = xdp.data + len;
> -		xdp.data_meta = xdp.data;
> +		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> +				 xdp_headroom, len);
>  		orig_data = xdp.data;
>  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  		stats->xdp_packets++;
> @@ -859,10 +857,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		 */
>  		data = page_address(xdp_page) + offset;
>  		xdp_init_buff(&xdp, frame_sz - vi->hdr_len, &rq->xdp_rxq);
> -		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> -		xdp.data = data + vi->hdr_len;
> -		xdp.data_end = xdp.data + (len - vi->hdr_len);
> -		xdp.data_meta = xdp.data;
> +		xdp_prepare_buff(&xdp, data - VIRTIO_XDP_HEADROOM + vi->hdr_len,
> +				 VIRTIO_XDP_HEADROOM, len - vi->hdr_len);
>  
>  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  		stats->xdp_packets++;
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 329397c60d84..61d3f5f8b7f3 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
>  
>  	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
>  		      &queue->xdp_rxq);
> -	xdp->data_hard_start = page_address(pdata);
> -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
>  	xdp_set_data_meta_invalid(xdp);
> -	xdp->data_end = xdp->data + len;
>  
>  	act = bpf_prog_run_xdp(prog, xdp);
>  	switch (act) {
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 3fb3a9aa1b71..66d8a4b317a3 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
>  	xdp->rxq = rxq;
>  }
>  
> +static inline void
> +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> +		 int headroom, int data_len)
> +{
> +	unsigned char *data = hard_start + headroom;
> +
> +	xdp->data_hard_start = hard_start;
> +	xdp->data = data;
> +	xdp->data_end = data + data_len;
> +	xdp->data_meta = data;
> +}
> +
>  /* Reserve memory area at end-of data area.
>   *
>   * This macro reserves tailroom in the XDP buffer by limiting the
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index a8fa5a9e4137..fe5a80d396e3 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -636,10 +636,7 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>  	if (IS_ERR(data))
>  		return PTR_ERR(data);
>  
> -	xdp.data_hard_start = data;
> -	xdp.data = data + headroom;
> -	xdp.data_meta = xdp.data;
> -	xdp.data_end = xdp.data + size;
> +	xdp_prepare_buff(&xdp, data, headroom, size);
>  
>  	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
>  	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
> diff --git a/net/core/dev.c b/net/core/dev.c
> index bac56afcf6bc..2997177876cc 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4592,7 +4592,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
>  	__be16 orig_eth_type;
>  	struct ethhdr *eth;
>  	bool orig_bcast;
> -	int hlen, off;
> +	int off;
>  
>  	/* Reinjected packets coming from act_mirred or similar should
>  	 * not get XDP generic processing.
> @@ -4624,11 +4624,9 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
>  	 * header.
>  	 */
>  	mac_len = skb->data - skb_mac_header(skb);
> -	hlen = skb_headlen(skb) + mac_len;
> -	xdp->data = skb->data - mac_len;
> -	xdp->data_meta = xdp->data;
> -	xdp->data_end = xdp->data + hlen;
> -	xdp->data_hard_start = skb->data - skb_headroom(skb);
> +	xdp_prepare_buff(xdp, skb->data - skb_headroom(skb),
> +			 skb_headroom(skb) - mac_len,
> +			 skb_headlen(skb) + mac_len);
>  
>  	/* SKB "head" area always have tailroom for skb_shared_info */
>  	frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> -- 
> 2.29.2
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 12:36   ` Maciej Fijalkowski
@ 2020-12-15 13:47     ` Lorenzo Bianconi
  2020-12-15 14:51       ` Daniel Borkmann
  2020-12-16  8:52       ` Jesper Dangaard Brouer
  0 siblings, 2 replies; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-15 13:47 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Lorenzo Bianconi, bpf, netdev, davem, kuba, ast, daniel, brouer,
	alexander.duyck, saeed

[-- Attachment #1: Type: text/plain, Size: 22259 bytes --]

[...]
> >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > index 4dbbbd49c389..fcd1ca3343fb 100644
> > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> >  
> >  		/* retrieve a buffer from the ring */
> >  		if (!skb) {
> > -			xdp.data = page_address(rx_buffer->page) +
> > -				   rx_buffer->page_offset;
> > -			xdp.data_meta = xdp.data;
> > -			xdp.data_hard_start = xdp.data -
> > -					      i40e_rx_offset(rx_ring);
> > -			xdp.data_end = xdp.data + size;
> > +			unsigned int offset = i40e_rx_offset(rx_ring);
> 
> I now see that we could call i40e_rx_offset() just once per NAPI poll, so can
> you pull this variable out of the loop and initialize it a single time? The
> same applies to the other Intel drivers as well.

Ack, fine. I will fix it in v4.

Regards,
Lorenzo

> 
> I also feel it's sub-optimal for drivers that calculate data_hard_start
> from data (intel, bnxt, sfc and mlx4 take this approach), due to the extra
> add, but I don't have a solution for that. It would be weird to have yet
> another helper. Not sure what other people think, but the phrase "death by
> 1000 cuts" comes to mind :)
> 
> > +			unsigned char *hard_start;
> > +
> > +			hard_start = page_address(rx_buffer->page) +
> > +				     rx_buffer->page_offset - offset;
> > +			xdp_prepare_buff(&xdp, hard_start, offset, size);
> >  #if (PAGE_SIZE > 4096)
> >  			/* At larger PAGE_SIZE, frame_sz depend on len size */
> >  			xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
> > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
> > index d52d98d56367..a7a00060f520 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_txrx.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
> > @@ -1094,8 +1094,9 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
> >  	while (likely(total_rx_pkts < (unsigned int)budget)) {
> >  		union ice_32b_rx_flex_desc *rx_desc;
> >  		struct ice_rx_buf *rx_buf;
> > +		unsigned int size, offset;
> > +		unsigned char *hard_start;
> >  		struct sk_buff *skb;
> > -		unsigned int size;
> >  		u16 stat_err_bits;
> >  		u16 vlan_tag = 0;
> >  		u8 rx_ptype;
> > @@ -1138,10 +1139,10 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
> >  			goto construct_skb;
> >  		}
> >  
> > -		xdp.data = page_address(rx_buf->page) + rx_buf->page_offset;
> > -		xdp.data_hard_start = xdp.data - ice_rx_offset(rx_ring);
> > -		xdp.data_meta = xdp.data;
> > -		xdp.data_end = xdp.data + size;
> > +		offset = ice_rx_offset(rx_ring);
> > +		hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
> > +			     offset;
> > +		xdp_prepare_buff(&xdp, hard_start, offset, size);
> >  #if (PAGE_SIZE > 4096)
> >  		/* At larger PAGE_SIZE, frame_sz depend on len size */
> >  		xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size);
> > diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> > index 365dfc0e3b65..070b2bb4e9ca 100644
> > --- a/drivers/net/ethernet/intel/igb/igb_main.c
> > +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> > @@ -8700,12 +8700,12 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
> >  
> >  		/* retrieve a buffer from the ring */
> >  		if (!skb) {
> > -			xdp.data = page_address(rx_buffer->page) +
> > -				   rx_buffer->page_offset;
> > -			xdp.data_meta = xdp.data;
> > -			xdp.data_hard_start = xdp.data -
> > -					      igb_rx_offset(rx_ring);
> > -			xdp.data_end = xdp.data + size;
> > +			unsigned int offset = igb_rx_offset(rx_ring);
> > +			unsigned char *hard_start;
> > +
> > +			hard_start = page_address(rx_buffer->page) +
> > +				     rx_buffer->page_offset - offset;
> > +			xdp_prepare_buff(&xdp, hard_start, offset, size);
> >  #if (PAGE_SIZE > 4096)
> >  			/* At larger PAGE_SIZE, frame_sz depend on len size */
> >  			xdp.frame_sz = igb_rx_frame_truesize(rx_ring, size);
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > index dcd49cfa36f7..e34054433c7a 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > @@ -2325,12 +2325,12 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
> >  
> >  		/* retrieve a buffer from the ring */
> >  		if (!skb) {
> > -			xdp.data = page_address(rx_buffer->page) +
> > -				   rx_buffer->page_offset;
> > -			xdp.data_meta = xdp.data;
> > -			xdp.data_hard_start = xdp.data -
> > -					      ixgbe_rx_offset(rx_ring);
> > -			xdp.data_end = xdp.data + size;
> > +			unsigned int offset = ixgbe_rx_offset(rx_ring);
> > +			unsigned char *hard_start;
> > +
> > +			hard_start = page_address(rx_buffer->page) +
> > +				     rx_buffer->page_offset - offset;
> > +			xdp_prepare_buff(&xdp, hard_start, offset, size);
> >  #if (PAGE_SIZE > 4096)
> >  			/* At larger PAGE_SIZE, frame_sz depend on len size */
> >  			xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, size);
> > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > index 624efcd71569..51df79005ccb 100644
> > --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > @@ -1160,12 +1160,12 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
> >  
> >  		/* retrieve a buffer from the ring */
> >  		if (!skb) {
> > -			xdp.data = page_address(rx_buffer->page) +
> > -				   rx_buffer->page_offset;
> > -			xdp.data_meta = xdp.data;
> > -			xdp.data_hard_start = xdp.data -
> > -					      ixgbevf_rx_offset(rx_ring);
> > -			xdp.data_end = xdp.data + size;
> > +			unsigned int offset = ixgbevf_rx_offset(rx_ring);
> > +			unsigned char *hard_start;
> > +
> > +			hard_start = page_address(rx_buffer->page) +
> > +				     rx_buffer->page_offset - offset;
> > +			xdp_prepare_buff(&xdp, hard_start, offset, size);
> >  #if (PAGE_SIZE > 4096)
> >  			/* At larger PAGE_SIZE, frame_sz depend on len size */
> >  			xdp.frame_sz = ixgbevf_rx_frame_truesize(rx_ring, size);
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index acbb9cb85ada..af6c9cf59809 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -2263,10 +2263,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
> >  
> >  	/* Prefetch header */
> >  	prefetch(data);
> > -
> > -	xdp->data_hard_start = data;
> > -	xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;
> > -	xdp->data_end = xdp->data + data_len;
> > +	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
> > +			 data_len);
> >  	xdp_set_data_meta_invalid(xdp);
> >  
> >  	sinfo = xdp_get_shared_info_from_buff(xdp);
> > diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > index ca05dfc05058..8c2197b96515 100644
> > --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > @@ -3564,16 +3564,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> >  		if (xdp_prog) {
> >  			struct xdp_rxq_info *xdp_rxq;
> >  
> > -			xdp.data_hard_start = data;
> > -			xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
> > -			xdp.data_end = xdp.data + rx_bytes;
> > -
> >  			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
> >  				xdp_rxq = &rxq->xdp_rxq_short;
> >  			else
> >  				xdp_rxq = &rxq->xdp_rxq_long;
> >  
> >  			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
> > +			xdp_prepare_buff(&xdp, data,
> > +					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
> > +					 rx_bytes);
> >  			xdp_set_data_meta_invalid(&xdp);
> >  
> >  			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp, &ps);
> > diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > index 815381b484ca..86c63dedc689 100644
> > --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > @@ -776,10 +776,9 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
> >  						priv->frag_info[0].frag_size,
> >  						DMA_FROM_DEVICE);
> >  
> > -			xdp.data_hard_start = va - frags[0].page_offset;
> > -			xdp.data = va;
> > +			xdp_prepare_buff(&xdp, va - frags[0].page_offset,
> > +					 frags[0].page_offset, length);
> >  			xdp_set_data_meta_invalid(&xdp);
> > -			xdp.data_end = xdp.data + length;
> >  			orig_data = xdp.data;
> >  
> >  			act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index c68628b1f30b..a2f4f0ce427f 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -1128,10 +1128,8 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
> >  				u32 len, struct xdp_buff *xdp)
> >  {
> >  	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
> > -	xdp->data_hard_start = va;
> > -	xdp->data = va + headroom;
> > +	xdp_prepare_buff(xdp, va, headroom, len);
> >  	xdp_set_data_meta_invalid(xdp);
> > -	xdp->data_end = xdp->data + len;
> >  }
> >  
> >  static struct sk_buff *
> > diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> > index 68e03e8257f2..5d0046c24b8c 100644
> > --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> > +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
> > @@ -1914,10 +1914,10 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
> >  			unsigned int dma_off;
> >  			int act;
> >  
> > -			xdp.data_hard_start = rxbuf->frag + NFP_NET_RX_BUF_HEADROOM;
> > -			xdp.data = orig_data;
> > -			xdp.data_meta = orig_data;
> > -			xdp.data_end = orig_data + pkt_len;
> > +			xdp_prepare_buff(&xdp,
> > +					 rxbuf->frag + NFP_NET_RX_BUF_HEADROOM,
> > +					 pkt_off - NFP_NET_RX_BUF_HEADROOM,
> > +					 pkt_len);
> >  
> >  			act = bpf_prog_run_xdp(xdp_prog, &xdp);
> >  
> > diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
> > index d40220043883..9c50df499046 100644
> > --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
> > +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
> > @@ -1091,10 +1091,8 @@ static bool qede_rx_xdp(struct qede_dev *edev,
> >  	enum xdp_action act;
> >  
> >  	xdp_init_buff(&xdp, rxq->rx_buf_seg_size, &rxq->xdp_rxq);
> > -	xdp.data_hard_start = page_address(bd->data);
> > -	xdp.data = xdp.data_hard_start + *data_offset;
> > +	xdp_prepare_buff(&xdp, page_address(bd->data), *data_offset, *len);
> >  	xdp_set_data_meta_invalid(&xdp);
> > -	xdp.data_end = xdp.data + *len;
> >  
> >  	/* Queues always have a full reset currently, so for the time
> >  	 * being until there's atomic program replace just mark read
> > diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
> > index eaa6650955d1..9015a1639234 100644
> > --- a/drivers/net/ethernet/sfc/rx.c
> > +++ b/drivers/net/ethernet/sfc/rx.c
> > @@ -294,12 +294,10 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
> >  	       efx->rx_prefix_size);
> >  
> >  	xdp_init_buff(&xdp, efx->rx_page_buf_step, &rx_queue->xdp_rxq_info);
> > -	xdp.data = *ehp;
> > -	xdp.data_hard_start = xdp.data - EFX_XDP_HEADROOM;
> > -
> > +	xdp_prepare_buff(&xdp, *ehp - EFX_XDP_HEADROOM, EFX_XDP_HEADROOM,
> > +			 rx_buf->len);
> >  	/* No support yet for XDP metadata */
> >  	xdp_set_data_meta_invalid(&xdp);
> > -	xdp.data_end = xdp.data + rx_buf->len;
> >  
> >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> >  	rcu_read_unlock();
> > diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
> > index 945ca9517bf9..80bb1a6612b1 100644
> > --- a/drivers/net/ethernet/socionext/netsec.c
> > +++ b/drivers/net/ethernet/socionext/netsec.c
> > @@ -1015,10 +1015,9 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
> >  					dma_dir);
> >  		prefetch(desc->addr);
> >  
> > -		xdp.data_hard_start = desc->addr;
> > -		xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
> > +		xdp_prepare_buff(&xdp, desc->addr, NETSEC_RXBUF_HEADROOM,
> > +				 pkt_len);
> >  		xdp_set_data_meta_invalid(&xdp);
> > -		xdp.data_end = xdp.data + pkt_len;
> >  
> >  		if (xdp_prog) {
> >  			xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
> > diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> > index 78a923391828..c08fd6a6be9b 100644
> > --- a/drivers/net/ethernet/ti/cpsw.c
> > +++ b/drivers/net/ethernet/ti/cpsw.c
> > @@ -392,22 +392,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
> >  	}
> >  
> >  	if (priv->xdp_prog) {
> > -		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> > +		int headroom = CPSW_HEADROOM, size = len;
> >  
> > +		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> >  		if (status & CPDMA_RX_VLAN_ENCAP) {
> > -			xdp.data = pa + CPSW_HEADROOM +
> > -				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > -			xdp.data_end = xdp.data + len -
> > -				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > -		} else {
> > -			xdp.data = pa + CPSW_HEADROOM;
> > -			xdp.data_end = xdp.data + len;
> > +			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > +			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> >  		}
> >  
> > +		xdp_prepare_buff(&xdp, pa, headroom, size);
> >  		xdp_set_data_meta_invalid(&xdp);
> >  
> > -		xdp.data_hard_start = pa;
> > -
> >  		port = priv->emac_port + cpsw->data.dual_emac;
> >  		ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
> >  		if (ret != CPSW_XDP_PASS)
> > diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
> > index 1b3385ec9645..c74c997d1cf2 100644
> > --- a/drivers/net/ethernet/ti/cpsw_new.c
> > +++ b/drivers/net/ethernet/ti/cpsw_new.c
> > @@ -335,22 +335,17 @@ static void cpsw_rx_handler(void *token, int len, int status)
> >  	}
> >  
> >  	if (priv->xdp_prog) {
> > -		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> > +		int headroom = CPSW_HEADROOM, size = len;
> >  
> > +		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
> >  		if (status & CPDMA_RX_VLAN_ENCAP) {
> > -			xdp.data = pa + CPSW_HEADROOM +
> > -				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > -			xdp.data_end = xdp.data + len -
> > -				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > -		} else {
> > -			xdp.data = pa + CPSW_HEADROOM;
> > -			xdp.data_end = xdp.data + len;
> > +			headroom += CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> > +			size -= CPSW_RX_VLAN_ENCAP_HDR_SIZE;
> >  		}
> >  
> > +		xdp_prepare_buff(&xdp, pa, headroom, size);
> >  		xdp_set_data_meta_invalid(&xdp);
> >  
> > -		xdp.data_hard_start = pa;
> > -
> >  		ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port);
> >  		if (ret != CPSW_XDP_PASS)
> >  			goto requeue;
> > diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
> > index 14a7ee4c6899..93c202d6aff5 100644
> > --- a/drivers/net/hyperv/netvsc_bpf.c
> > +++ b/drivers/net/hyperv/netvsc_bpf.c
> > @@ -45,10 +45,8 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
> >  	}
> >  
> >  	xdp_init_buff(xdp, PAGE_SIZE, &nvchan->xdp_rxq);
> > -	xdp->data_hard_start = page_address(page);
> > -	xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM;
> > +	xdp_prepare_buff(xdp, page_address(page), NETVSC_XDP_HDRM, len);
> >  	xdp_set_data_meta_invalid(xdp);
> > -	xdp->data_end = xdp->data + len;
> >  
> >  	memcpy(xdp->data, data, len);
> >  
> > diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> > index a82f7823d428..c7cbd058b345 100644
> > --- a/drivers/net/tun.c
> > +++ b/drivers/net/tun.c
> > @@ -1600,10 +1600,8 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> >  		u32 act;
> >  
> >  		xdp_init_buff(&xdp, buflen, &tfile->xdp_rxq);
> > -		xdp.data_hard_start = buf;
> > -		xdp.data = buf + pad;
> > +		xdp_prepare_buff(&xdp, buf, pad, len);
> >  		xdp_set_data_meta_invalid(&xdp);
> > -		xdp.data_end = xdp.data + len;
> >  
> >  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
> >  		if (act == XDP_REDIRECT || act == XDP_TX) {
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index 25f3601fb6dd..30a7f2ad39c3 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -710,11 +710,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
> >  		skb = nskb;
> >  	}
> >  
> > -	xdp.data_hard_start = skb->head;
> > -	xdp.data = skb_mac_header(skb);
> > -	xdp.data_end = xdp.data + pktlen;
> > -	xdp.data_meta = xdp.data;
> > -
> > +	xdp_prepare_buff(&xdp, skb->head, skb->mac_header, pktlen);
> >  	/* SKB "head" area always have tailroom for skb_shared_info */
> >  	frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
> >  	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index a22ce87bcd9c..e57b2d452cbc 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -690,10 +690,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
> >  		}
> >  
> >  		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
> > -		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> > -		xdp.data = xdp.data_hard_start + xdp_headroom;
> > -		xdp.data_end = xdp.data + len;
> > -		xdp.data_meta = xdp.data;
> > +		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > +				 xdp_headroom, len);
> >  		orig_data = xdp.data;
> >  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
> >  		stats->xdp_packets++;
> > @@ -859,10 +857,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  		 */
> >  		data = page_address(xdp_page) + offset;
> >  		xdp_init_buff(&xdp, frame_sz - vi->hdr_len, &rq->xdp_rxq);
> > -		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> > -		xdp.data = data + vi->hdr_len;
> > -		xdp.data_end = xdp.data + (len - vi->hdr_len);
> > -		xdp.data_meta = xdp.data;
> > +		xdp_prepare_buff(&xdp, data - VIRTIO_XDP_HEADROOM + vi->hdr_len,
> > +				 VIRTIO_XDP_HEADROOM, len - vi->hdr_len);
> >  
> >  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
> >  		stats->xdp_packets++;
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index 329397c60d84..61d3f5f8b7f3 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
> >  
> >  	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
> >  		      &queue->xdp_rxq);
> > -	xdp->data_hard_start = page_address(pdata);
> > -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
> >  	xdp_set_data_meta_invalid(xdp);
> > -	xdp->data_end = xdp->data + len;
> >  
> >  	act = bpf_prog_run_xdp(prog, xdp);
> >  	switch (act) {
> > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > index 3fb3a9aa1b71..66d8a4b317a3 100644
> > --- a/include/net/xdp.h
> > +++ b/include/net/xdp.h
> > @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> >  	xdp->rxq = rxq;
> >  }
> >  
> > +static inline void
> > +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > +		 int headroom, int data_len)
> > +{
> > +	unsigned char *data = hard_start + headroom;
> > +
> > +	xdp->data_hard_start = hard_start;
> > +	xdp->data = data;
> > +	xdp->data_end = data + data_len;
> > +	xdp->data_meta = data;
> > +}
> > +
> >  /* Reserve memory area at end-of data area.
> >   *
> >   * This macro reserves tailroom in the XDP buffer by limiting the
> > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> > index a8fa5a9e4137..fe5a80d396e3 100644
> > --- a/net/bpf/test_run.c
> > +++ b/net/bpf/test_run.c
> > @@ -636,10 +636,7 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
> >  	if (IS_ERR(data))
> >  		return PTR_ERR(data);
> >  
> > -	xdp.data_hard_start = data;
> > -	xdp.data = data + headroom;
> > -	xdp.data_meta = xdp.data;
> > -	xdp.data_end = xdp.data + size;
> > +	xdp_prepare_buff(&xdp, data, headroom, size);
> >  
> >  	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
> >  	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index bac56afcf6bc..2997177876cc 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4592,7 +4592,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  	__be16 orig_eth_type;
> >  	struct ethhdr *eth;
> >  	bool orig_bcast;
> > -	int hlen, off;
> > +	int off;
> >  
> >  	/* Reinjected packets coming from act_mirred or similar should
> >  	 * not get XDP generic processing.
> > @@ -4624,11 +4624,9 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  	 * header.
> >  	 */
> >  	mac_len = skb->data - skb_mac_header(skb);
> > -	hlen = skb_headlen(skb) + mac_len;
> > -	xdp->data = skb->data - mac_len;
> > -	xdp->data_meta = xdp->data;
> > -	xdp->data_end = xdp->data + hlen;
> > -	xdp->data_hard_start = skb->data - skb_headroom(skb);
> > +	xdp_prepare_buff(xdp, skb->data - skb_headroom(skb),
> > +			 skb_headroom(skb) - mac_len,
> > +			 skb_headlen(skb) + mac_len);
> >  
> >  	/* SKB "head" area always have tailroom for skb_shared_info */
> >  	frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> > -- 
> > 2.29.2
> > 
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 13:47     ` Lorenzo Bianconi
@ 2020-12-15 14:51       ` Daniel Borkmann
  2020-12-15 15:06         ` Lorenzo Bianconi
  2020-12-16  8:52       ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 19+ messages in thread
From: Daniel Borkmann @ 2020-12-15 14:51 UTC (permalink / raw)
  To: Lorenzo Bianconi, Maciej Fijalkowski
  Cc: Lorenzo Bianconi, bpf, netdev, davem, kuba, ast, brouer,
	alexander.duyck, saeed

On 12/15/20 2:47 PM, Lorenzo Bianconi wrote:
[...]
>>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>>> index 329397c60d84..61d3f5f8b7f3 100644
>>> --- a/drivers/net/xen-netfront.c
>>> +++ b/drivers/net/xen-netfront.c
>>> @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
>>>   
>>>   	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
>>>   		      &queue->xdp_rxq);
>>> -	xdp->data_hard_start = page_address(pdata);
>>> -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
>>> +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
>>>   	xdp_set_data_meta_invalid(xdp);
>>> -	xdp->data_end = xdp->data + len;
>>>   
>>>   	act = bpf_prog_run_xdp(prog, xdp);
>>>   	switch (act) {
>>> diff --git a/include/net/xdp.h b/include/net/xdp.h
>>> index 3fb3a9aa1b71..66d8a4b317a3 100644
>>> --- a/include/net/xdp.h
>>> +++ b/include/net/xdp.h
>>> @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
>>>   	xdp->rxq = rxq;
>>>   }
>>>   
>>> +static inline void

nit: maybe __always_inline

>>> +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
>>> +		 int headroom, int data_len)
>>> +{
>>> +	unsigned char *data = hard_start + headroom;
>>> +
>>> +	xdp->data_hard_start = hard_start;
>>> +	xdp->data = data;
>>> +	xdp->data_end = data + data_len;
>>> +	xdp->data_meta = data;
>>> +}
>>> +
>>>   /* Reserve memory area at end-of data area.
>>>    *

For the drivers with xdp_set_data_meta_invalid(), we're basically setting xdp->data_meta
twice unless compiler is smart enough to optimize the first one away (did you double check?).
Given this is supposed to be a cleanup, why not integrate this logic as well so the
xdp_set_data_meta_invalid() doesn't get extra treatment?

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 14:51       ` Daniel Borkmann
@ 2020-12-15 15:06         ` Lorenzo Bianconi
  2020-12-15 15:13           ` Maciej Fijalkowski
  0 siblings, 1 reply; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-15 15:06 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Maciej Fijalkowski, Lorenzo Bianconi, bpf, netdev, davem, kuba,
	ast, brouer, alexander.duyck, saeed

> On 12/15/20 2:47 PM, Lorenzo Bianconi wrote:
> [...]
> > > > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > > > index 329397c60d84..61d3f5f8b7f3 100644
> > > > --- a/drivers/net/xen-netfront.c
> > > > +++ b/drivers/net/xen-netfront.c
> > > > @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
> > > >   	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
> > > >   		      &queue->xdp_rxq);
> > > > -	xdp->data_hard_start = page_address(pdata);
> > > > -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > > > +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
> > > >   	xdp_set_data_meta_invalid(xdp);
> > > > -	xdp->data_end = xdp->data + len;
> > > >   	act = bpf_prog_run_xdp(prog, xdp);
> > > >   	switch (act) {
> > > > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > > > index 3fb3a9aa1b71..66d8a4b317a3 100644
> > > > --- a/include/net/xdp.h
> > > > +++ b/include/net/xdp.h
> > > > @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > > >   	xdp->rxq = rxq;
> > > >   }
> > > > +static inline void
> 
> nit: maybe __always_inline

ack, I will add in v4

> 
> > > > +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > > > +		 int headroom, int data_len)
> > > > +{
> > > > +	unsigned char *data = hard_start + headroom;
> > > > +
> > > > +	xdp->data_hard_start = hard_start;
> > > > +	xdp->data = data;
> > > > +	xdp->data_end = data + data_len;
> > > > +	xdp->data_meta = data;
> > > > +}
> > > > +
> > > >   /* Reserve memory area at end-of data area.
> > > >    *
> 
> For the drivers with xdp_set_data_meta_invalid(), we're basically setting xdp->data_meta
> twice unless compiler is smart enough to optimize the first one away (did you double check?).
> Given this is supposed to be a cleanup, why not integrate this logic as well so the
> xdp_set_data_meta_invalid() doesn't get extra treatment?

we discussed it before, but I am fine to add it in v4. Something like:

static __always_inline void
xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
		 int headroom, int data_len, bool meta_valid)
{
	unsigned char *data = hard_start + headroom;
	
	xdp->data_hard_start = hard_start;
	xdp->data = data;
	xdp->data_end = data + data_len;
	xdp->data_meta = meta_valid ? data : data + 1;
}
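
To illustrate (untested sketch, not part of the posted patch), the
xen-netfront and bpf/test_run hunks above would then become something
like:

	/* xen-netfront: metadata not supported, so pass meta_valid = false
	 * and drop the explicit xdp_set_data_meta_invalid() call
	 */
	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len,
			 false);

	/* net/bpf/test_run.c: the metadata area is valid */
	xdp_prepare_buff(&xdp, data, headroom, size, true);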

Regards,
Lorenzo

> 
> Thanks,
> Daniel
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 15:06         ` Lorenzo Bianconi
@ 2020-12-15 15:13           ` Maciej Fijalkowski
  2020-12-15 20:36             ` Lorenzo Bianconi
  2020-12-16  8:30             ` Jesper Dangaard Brouer
  0 siblings, 2 replies; 19+ messages in thread
From: Maciej Fijalkowski @ 2020-12-15 15:13 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: Daniel Borkmann, Lorenzo Bianconi, bpf, netdev, davem, kuba, ast,
	brouer, alexander.duyck, saeed

On Tue, Dec 15, 2020 at 04:06:20PM +0100, Lorenzo Bianconi wrote:
> > On 12/15/20 2:47 PM, Lorenzo Bianconi wrote:
> > [...]
> > > > > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > > > > index 329397c60d84..61d3f5f8b7f3 100644
> > > > > --- a/drivers/net/xen-netfront.c
> > > > > +++ b/drivers/net/xen-netfront.c
> > > > > @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
> > > > >   	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
> > > > >   		      &queue->xdp_rxq);
> > > > > -	xdp->data_hard_start = page_address(pdata);
> > > > > -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > > > > +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
> > > > >   	xdp_set_data_meta_invalid(xdp);
> > > > > -	xdp->data_end = xdp->data + len;
> > > > >   	act = bpf_prog_run_xdp(prog, xdp);
> > > > >   	switch (act) {
> > > > > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > > > > index 3fb3a9aa1b71..66d8a4b317a3 100644
> > > > > --- a/include/net/xdp.h
> > > > > +++ b/include/net/xdp.h
> > > > > @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > > > >   	xdp->rxq = rxq;
> > > > >   }
> > > > > +static inline void
> > 
> > nit: maybe __always_inline
> 
> ack, I will add in v4
> 
> > 
> > > > > +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > > > > +		 int headroom, int data_len)
> > > > > +{
> > > > > +	unsigned char *data = hard_start + headroom;
> > > > > +
> > > > > +	xdp->data_hard_start = hard_start;
> > > > > +	xdp->data = data;
> > > > > +	xdp->data_end = data + data_len;
> > > > > +	xdp->data_meta = data;
> > > > > +}
> > > > > +
> > > > >   /* Reserve memory area at end-of data area.
> > > > >    *
> > 
> > For the drivers with xdp_set_data_meta_invalid(), we're basically setting xdp->data_meta
> > twice unless compiler is smart enough to optimize the first one away (did you double check?).
> > Given this is supposed to be a cleanup, why not integrate this logic as well so the
> > xdp_set_data_meta_invalid() doesn't get extra treatment?

That's what I was trying to say previously.

> 
> we discussed it before, but I am fine to add it in v4. Something like:
> 
> static __always_inline void
> xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> 		 int headroom, int data_len, bool meta_valid)
> {
> 	unsigned char *data = hard_start + headroom;
> 	
> 	xdp->data_hard_start = hard_start;
> 	xdp->data = data;
> 	xdp->data_end = data + data_len;
> 	xdp->data_meta = meta_valid ? data : data + 1;

This will introduce branch, so for intel drivers we're getting the
overhead of one add and a branch. I'm still opting for a separate helper.

static __always_inline void
xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
		 int headroom, int data_len)
{
	unsigned char *data = hard_start + headroom;

	xdp->data_hard_start = hard_start;
	xdp->data = data;
	xdp->data_end = data + data_len;
	xdp_set_data_meta_invalid(xdp);
}

static __always_inline void
xdp_prepare_buff_meta(struct xdp_buff *xdp, unsigned char *hard_start,
		      int headroom, int data_len)
{
	unsigned char *data = hard_start + headroom;

	xdp->data_hard_start = hard_start;
	xdp->data = data;
	xdp->data_end = data + data_len;
	xdp->data_meta = data;
}
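
With this split the call sites stay branch-free: sites that keep the
metadata valid (e.g. net/bpf/test_run.c) would move to
xdp_prepare_buff_meta() and the rest stay on xdp_prepare_buff(), roughly
(untested, just to show the intent):

	/* driver that invalidates metadata, e.g. xen-netfront */
	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);

	/* caller with valid metadata, e.g. net/bpf/test_run.c */
	xdp_prepare_buff_meta(&xdp, data, headroom, size);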

> }
> 
> Regards,
> Lorenzo
> 
> > 
> > Thanks,
> > Daniel
> > 



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 15:13           ` Maciej Fijalkowski
@ 2020-12-15 20:36             ` Lorenzo Bianconi
  2020-12-16  8:30             ` Jesper Dangaard Brouer
  1 sibling, 0 replies; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-15 20:36 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Lorenzo Bianconi, Daniel Borkmann, bpf, netdev, davem, kuba, ast,
	brouer, alexander.duyck, saeed

> On Tue, Dec 15, 2020 at 04:06:20PM +0100, Lorenzo Bianconi wrote:
> > > On 12/15/20 2:47 PM, Lorenzo Bianconi wrote:
> > > [...]
> > > > > > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > > > > > index 329397c60d84..61d3f5f8b7f3 100644
> > > > > > --- a/drivers/net/xen-netfront.c
> > > > > > +++ b/drivers/net/xen-netfront.c
> > > > > > @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
> > > > > >   	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
> > > > > >   		      &queue->xdp_rxq);
> > > > > > -	xdp->data_hard_start = page_address(pdata);
> > > > > > -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > > > > > +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
> > > > > >   	xdp_set_data_meta_invalid(xdp);
> > > > > > -	xdp->data_end = xdp->data + len;
> > > > > >   	act = bpf_prog_run_xdp(prog, xdp);
> > > > > >   	switch (act) {
> > > > > > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > > > > > index 3fb3a9aa1b71..66d8a4b317a3 100644
> > > > > > --- a/include/net/xdp.h
> > > > > > +++ b/include/net/xdp.h
> > > > > > @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > > > > >   	xdp->rxq = rxq;
> > > > > >   }
> > > > > > +static inline void
> > > 
> > > nit: maybe __always_inline
> > 
> > ack, I will add in v4
> > 
> > > 
> > > > > > +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > > > > > +		 int headroom, int data_len)
> > > > > > +{
> > > > > > +	unsigned char *data = hard_start + headroom;
> > > > > > +
> > > > > > +	xdp->data_hard_start = hard_start;
> > > > > > +	xdp->data = data;
> > > > > > +	xdp->data_end = data + data_len;
> > > > > > +	xdp->data_meta = data;
> > > > > > +}
> > > > > > +
> > > > > >   /* Reserve memory area at end-of data area.
> > > > > >    *
> > > 
> > > For the drivers with xdp_set_data_meta_invalid(), we're basically setting xdp->data_meta
> > > twice unless compiler is smart enough to optimize the first one away (did you double check?).
> > > Given this is supposed to be a cleanup, why not integrate this logic as well so the
> > > xdp_set_data_meta_invalid() doesn't get extra treatment?
> 
> That's what I was trying to say previously.
> 
> > 
> > we discussed it before, but I am fine to add it in v4. Something like:
> > 
> > static __always_inline void
> > xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > 		 int headroom, int data_len, bool meta_valid)
> > {
> > 	unsigned char *data = hard_start + headroom;
> > 	
> > 	xdp->data_hard_start = hard_start;
> > 	xdp->data = data;
> > 	xdp->data_end = data + data_len;
> > 	xdp->data_meta = meta_valid ? data : data + 1;
> 
> This will introduce branch, so for intel drivers we're getting the
> overhead of one add and a branch. I'm still opting for a separate helper.
> 
> static __always_inline void
> xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> 		 int headroom, int data_len)
> {
> 	unsigned char *data = hard_start + headroom;
> 
> 	xdp->data_hard_start = hard_start;
> 	xdp->data = data;
> 	xdp->data_end = data + data_len;
> 	xdp_set_data_meta_invalid(xdp);
> }
> 
> static __always_inline void
> xdp_prepare_buff_meta(struct xdp_buff *xdp, unsigned char *hard_start,
> 		      int headroom, int data_len)
> {
> 	unsigned char *data = hard_start + headroom;
> 
> 	xdp->data_hard_start = hard_start;
> 	xdp->data = data;
> 	xdp->data_end = data + data_len;
> 	xdp->data_meta = data;
> }

yes, to follow up, the possible approaches we have here are:

- have 2 different helpers (xdp_prepare_buff_meta and xdp_prepare_buff) as
  suggested by Maciej
- move the data_meta initialization out of the helper and do it in each
  driver
- use the current approach and overwrite data_meta with
  xdp_set_data_meta_invalid() when necessary
- introduce a branch in order to have just one helper

which one works best for you?

Regards,
Lorenzo

> 
> > }
> > 
> > Regards,
> > Lorenzo
> > 
> > > 
> > > Thanks,
> > > Daniel
> > > 
> 
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 15:13           ` Maciej Fijalkowski
  2020-12-15 20:36             ` Lorenzo Bianconi
@ 2020-12-16  8:30             ` Jesper Dangaard Brouer
  1 sibling, 0 replies; 19+ messages in thread
From: Jesper Dangaard Brouer @ 2020-12-16  8:30 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Lorenzo Bianconi, Daniel Borkmann, Lorenzo Bianconi, bpf, netdev,
	davem, kuba, ast, alexander.duyck, saeed, brouer

On Tue, 15 Dec 2020 16:13:44 +0100
Maciej Fijalkowski <maciej.fijalkowski@intel.com> wrote:

> On Tue, Dec 15, 2020 at 04:06:20PM +0100, Lorenzo Bianconi wrote:
> > > On 12/15/20 2:47 PM, Lorenzo Bianconi wrote:
> > > [...]  
> > > > > > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > > > > > index 329397c60d84..61d3f5f8b7f3 100644
> > > > > > --- a/drivers/net/xen-netfront.c
> > > > > > +++ b/drivers/net/xen-netfront.c
> > > > > > @@ -866,10 +866,8 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
> > > > > >   	xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
> > > > > >   		      &queue->xdp_rxq);
> > > > > > -	xdp->data_hard_start = page_address(pdata);
> > > > > > -	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > > > > > +	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM, len);
> > > > > >   	xdp_set_data_meta_invalid(xdp);
> > > > > > -	xdp->data_end = xdp->data + len;
> > > > > >   	act = bpf_prog_run_xdp(prog, xdp);
> > > > > >   	switch (act) {
> > > > > > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > > > > > index 3fb3a9aa1b71..66d8a4b317a3 100644
> > > > > > --- a/include/net/xdp.h
> > > > > > +++ b/include/net/xdp.h
> > > > > > @@ -83,6 +83,18 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > > > > >   	xdp->rxq = rxq;
> > > > > >   }
> > > > > > +static inline void  
> > > 
> > > nit: maybe __always_inline  
> > 
> > ack, I will add in v4
> >   
> > >   
> > > > > > +xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > > > > > +		 int headroom, int data_len)
> > > > > > +{
> > > > > > +	unsigned char *data = hard_start + headroom;
> > > > > > +
> > > > > > +	xdp->data_hard_start = hard_start;
> > > > > > +	xdp->data = data;
> > > > > > +	xdp->data_end = data + data_len;
> > > > > > +	xdp->data_meta = data;
> > > > > > +}
> > > > > > +
> > > > > >   /* Reserve memory area at end-of data area.
> > > > > >    *  
> > > 
> > > For the drivers with xdp_set_data_meta_invalid(), we're basically setting xdp->data_meta
> > > twice unless compiler is smart enough to optimize the first one away (did you double check?).
> > > Given this is supposed to be a cleanup, why not integrate this logic as well so the
> > > xdp_set_data_meta_invalid() doesn't get extra treatment?  
> 
> That's what I was trying to say previously.
> 
> > 
> > we discussed it before, but I am fine to add it in v4. Something like:
> > 
> > static __always_inline void
> > xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > 		 int headroom, int data_len, bool meta_valid)
> > {
> > 	unsigned char *data = hard_start + headroom;
> > 	
> > 	xdp->data_hard_start = hard_start;
> > 	xdp->data = data;
> > 	xdp->data_end = data + data_len;
> > 	xdp->data_meta = meta_valid ? data : data + 1;  
> 
> This will introduce branch, so for intel drivers we're getting the
> overhead of one add and a branch. I'm still opting for a separate helper.

I should think that, as this gets inlined, the compiler should be able
to remove the branch.  I assume that 'meta_valid' will be a compile-time
constant in the drivers.  Maybe we should have the API be 'const bool meta_valid'?
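
I.e. keep a single helper, just with the flag as a const, something like
(untested sketch, only to illustrate the idea):

static __always_inline void
xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
		 int headroom, int data_len, const bool meta_valid)
{
	unsigned char *data = hard_start + headroom;

	xdp->data_hard_start = hard_start;
	xdp->data = data;
	xdp->data_end = data + data_len;
	/* callers are expected to pass a compile-time constant here, so
	 * the ternary should be folded away once this is inlined
	 */
	xdp->data_meta = meta_valid ? data : data + 1;
}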


> static __always_inline void
> xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> 		 int headroom, int data_len)
> {
> 	unsigned char *data = hard_start + headroom;
> 
> 	xdp->data_hard_start = hard_start;
> 	xdp->data = data;
> 	xdp->data_end = data + data_len;
> 	xdp_set_data_meta_invalid(xdp);
> }
> 
> static __always_inline void
> xdp_prepare_buff_meta(struct xdp_buff *xdp, unsigned char *hard_start,
> 		      int headroom, int data_len)
> {
> 	unsigned char *data = hard_start + headroom;
> 
> 	xdp->data_hard_start = hard_start;
> 	xdp->data = data;
> 	xdp->data_end = data + data_len;
> 	xdp->data_meta = data;
> }

Thanks to you, Maciej, for reviewing this! :-)

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine
  2020-12-12 17:41 ` [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine Lorenzo Bianconi
@ 2020-12-16  8:35   ` Jesper Dangaard Brouer
  2020-12-16 14:56     ` Lorenzo Bianconi
  0 siblings, 1 reply; 19+ messages in thread
From: Jesper Dangaard Brouer @ 2020-12-16  8:35 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, kuba, ast, daniel, lorenzo.bianconi,
	alexander.duyck, maciej.fijalkowski, saeed, brouer

On Sat, 12 Dec 2020 18:41:48 +0100
Lorenzo Bianconi <lorenzo@kernel.org> wrote:

> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> index fcc262064766..b7942c3440c0 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> @@ -133,12 +133,11 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
>  	dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
>  
>  	txr = rxr->bnapi->tx_ring;
> +	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);
>  	xdp.data_hard_start = *data_ptr - offset;
>  	xdp.data = *data_ptr;
>  	xdp_set_data_meta_invalid(&xdp);
>  	xdp.data_end = *data_ptr + *len;
> -	xdp.rxq = &rxr->xdp_rxq;
> -	xdp.frame_sz = PAGE_SIZE; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */
>  	orig_data = xdp.data;

I don't like losing the comment here.  Other developers reading this
code might assume that the size is always PAGE_SIZE, which is only the case
when XDP is enabled.  Let's save them from making this mistake.
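
E.g. the comment could simply move onto the new call (untested):

	/* BNXT_RX_PAGE_MODE(bp) when XDP enabled */
	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);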

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-15 13:47     ` Lorenzo Bianconi
  2020-12-15 14:51       ` Daniel Borkmann
@ 2020-12-16  8:52       ` Jesper Dangaard Brouer
  2020-12-16 15:01         ` Lorenzo Bianconi
  1 sibling, 1 reply; 19+ messages in thread
From: Jesper Dangaard Brouer @ 2020-12-16  8:52 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: Maciej Fijalkowski, Lorenzo Bianconi, bpf, netdev, davem, kuba,
	ast, daniel, alexander.duyck, saeed, brouer

On Tue, 15 Dec 2020 14:47:10 +0100
Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote:

> [...]
> > >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > index 4dbbbd49c389..fcd1ca3343fb 100644
> > > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> > >  
> > >  		/* retrieve a buffer from the ring */
> > >  		if (!skb) {
> > > -			xdp.data = page_address(rx_buffer->page) +
> > > -				   rx_buffer->page_offset;
> > > -			xdp.data_meta = xdp.data;
> > > -			xdp.data_hard_start = xdp.data -
> > > -					      i40e_rx_offset(rx_ring);
> > > -			xdp.data_end = xdp.data + size;
> > > +			unsigned int offset = i40e_rx_offset(rx_ring);  
> > 
> > I now see that we could call the i40e_rx_offset() once per napi, so can
> > you pull this variable out and have it initialized a single time? Applies
> > to other intel drivers as well.  
>
> ack, fine. I will fix in v4.

Be careful with the Intel drivers.  They have two modes (at compile
time) depending on PAGE_SIZE in system.  In one of the modes (default
one) you can place init of xdp.frame_sz outside the NAPI loop and init a
single time.  In the other mode you cannot, and it becomes dynamic per
packet.  Intel review this carefully, please!

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine
  2020-12-16  8:35   ` Jesper Dangaard Brouer
@ 2020-12-16 14:56     ` Lorenzo Bianconi
  0 siblings, 0 replies; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-16 14:56 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Lorenzo Bianconi, bpf, netdev, davem, kuba, ast, daniel,
	alexander.duyck, maciej.fijalkowski, saeed

> On Sat, 12 Dec 2020 18:41:48 +0100
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> 
> > diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> > index fcc262064766..b7942c3440c0 100644
> > --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> > +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
> > @@ -133,12 +133,11 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
> >  	dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
> >  
> >  	txr = rxr->bnapi->tx_ring;
> > +	xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq);
> >  	xdp.data_hard_start = *data_ptr - offset;
> >  	xdp.data = *data_ptr;
> >  	xdp_set_data_meta_invalid(&xdp);
> >  	xdp.data_end = *data_ptr + *len;
> > -	xdp.rxq = &rxr->xdp_rxq;
> > -	xdp.frame_sz = PAGE_SIZE; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */
> >  	orig_data = xdp.data;
> 
> I don't like losing the comment here.  Other developers reading this
> code might assume that the size is always PAGE_SIZE, which is only the case
> when XDP is enabled.  Let's save them from making this mistake.

ack, I will add it back in v4.

Regards,
Lorenzo

> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-16  8:52       ` Jesper Dangaard Brouer
@ 2020-12-16 15:01         ` Lorenzo Bianconi
  2020-12-17 18:16           ` Saeed Mahameed
  0 siblings, 1 reply; 19+ messages in thread
From: Lorenzo Bianconi @ 2020-12-16 15:01 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Maciej Fijalkowski, Lorenzo Bianconi, bpf, netdev, davem, kuba,
	ast, daniel, alexander.duyck, saeed

> On Tue, 15 Dec 2020 14:47:10 +0100
> Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote:
> 
> > [...]
> > > >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > index 4dbbbd49c389..fcd1ca3343fb 100644
> > > > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> > > >  
> > > >  		/* retrieve a buffer from the ring */
> > > >  		if (!skb) {
> > > > -			xdp.data = page_address(rx_buffer->page) +
> > > > -				   rx_buffer->page_offset;
> > > > -			xdp.data_meta = xdp.data;
> > > > -			xdp.data_hard_start = xdp.data -
> > > > -					      i40e_rx_offset(rx_ring);
> > > > -			xdp.data_end = xdp.data + size;
> > > > +			unsigned int offset = i40e_rx_offset(rx_ring);  
> > > 
> > > I now see that we could call the i40e_rx_offset() once per napi, so can
> > > you pull this variable out and have it initialized a single time? Applies
> > > to other intel drivers as well.  
> >
> > ack, fine. I will fix in v4.
> 
> Be careful with the Intel drivers.  They have two modes (at compile
> time) depending on PAGE_SIZE in system.  In one of the modes (default
> one) you can place init of xdp.frame_sz outside the NAPI loop and init a
> single time.  In the other mode you cannot, and it becomes dynamic per
> packet.  Intel review this carefully, please!

ack. Actually I kept the xdp.frame_sz configuration in the NAPI loop, but
an Intel review would be nice.

Regards,
Lorenzo

> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-16 15:01         ` Lorenzo Bianconi
@ 2020-12-17 18:16           ` Saeed Mahameed
  2020-12-17 18:28             ` Maciej Fijalkowski
  0 siblings, 1 reply; 19+ messages in thread
From: Saeed Mahameed @ 2020-12-17 18:16 UTC (permalink / raw)
  To: Lorenzo Bianconi, Jesper Dangaard Brouer
  Cc: Maciej Fijalkowski, Lorenzo Bianconi, bpf, netdev, davem, kuba,
	ast, daniel, alexander.duyck

On Wed, 2020-12-16 at 16:01 +0100, Lorenzo Bianconi wrote:
> > On Tue, 15 Dec 2020 14:47:10 +0100
> > Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote:
> > 
> > > [...]
> > > > >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > index 4dbbbd49c389..fcd1ca3343fb 100644
> > > > > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> > > > >  
> > > > >  		/* retrieve a buffer from the ring */
> > > > >  		if (!skb) {
> > > > > -			xdp.data = page_address(rx_buffer->page) +
> > > > > -				   rx_buffer->page_offset;
> > > > > -			xdp.data_meta = xdp.data;
> > > > > -			xdp.data_hard_start = xdp.data -
> > > > > -					      i40e_rx_offset(rx_ring);
> > > > > -			xdp.data_end = xdp.data + size;
> > > > > +			unsigned int offset = i40e_rx_offset(rx_ring);
> > > > > 
> > > > > I now see that we could call the i40e_rx_offset() once per napi, so can
> > > > > you pull this variable out and have it initialized a single time? Applies
> > > > > to other intel drivers as well.
> > > 

How is this related to this series? I suggest keeping this series clean
of vendor-specific, unrelated optimizations; this should be done in a
separate patchset.


> > > ack, fine. I will fix in v4.
> > 
> > Be careful with the Intel drivers.  They have two modes (at compile
> > time) depending on PAGE_SIZE in system.  In one of the modes (default
> > one) you can place init of xdp.frame_sz outside the NAPI loop and init a
> > single time.  In the other mode you cannot, and it becomes dynamic per
> > packet.  Intel review this carefully, please!
> 
> ack. Actually I kept the xdp.frame_sz configuration in the NAPI loop, but
> an Intel review would be nice.
> 
> Regards,
> Lorenzo
> 
> > -- 
> > Best regards,
> >   Jesper Dangaard Brouer
> >   MSc.CS, Principal Kernel Engineer at Red Hat
> >   LinkedIn: http://www.linkedin.com/in/brouer
> > 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-17 18:16           ` Saeed Mahameed
@ 2020-12-17 18:28             ` Maciej Fijalkowski
  2020-12-17 20:31               ` Saeed Mahameed
  0 siblings, 1 reply; 19+ messages in thread
From: Maciej Fijalkowski @ 2020-12-17 18:28 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Lorenzo Bianconi, Jesper Dangaard Brouer, Lorenzo Bianconi, bpf,
	netdev, davem, kuba, ast, daniel, alexander.duyck

On Thu, Dec 17, 2020 at 10:16:06AM -0800, Saeed Mahameed wrote:
> On Wed, 2020-12-16 at 16:01 +0100, Lorenzo Bianconi wrote:
> > > On Tue, 15 Dec 2020 14:47:10 +0100
> > > Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote:
> > > 
> > > > [...]
> > > > > >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > index 4dbbbd49c389..fcd1ca3343fb 100644
> > > > > > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> > > > > >  
> > > > > >  		/* retrieve a buffer from the ring */
> > > > > >  		if (!skb) {
> > > > > > -			xdp.data = page_address(rx_buffer->page) +
> > > > > > -				   rx_buffer->page_offset;
> > > > > > -			xdp.data_meta = xdp.data;
> > > > > > -			xdp.data_hard_start = xdp.data -
> > > > > > -					      i40e_rx_offset(rx_ring);
> > > > > > -			xdp.data_end = xdp.data + size;
> > > > > > +			unsigned int offset = i40e_rx_offset(rx_ring);
> > > > > > 
> > > > > > I now see that we could call the i40e_rx_offset() once per napi, so can
> > > > > > you pull this variable out and have it initialized a single time? Applies
> > > > > > to other intel drivers as well.
> > > > 
> 
> How is this related to this series? I suggest keeping this series clean
> of vendor-specific, unrelated optimizations; this should be done in a
> separate patchset.

Well, Lorenzo explicitly is touching the thing that I referred to, so I
just ask if he can optimize it while he's at it.

Of course I'm fine with addressing this by myself once -next opens :)

> 
> 
> > > > ack, fine. I will fix in v4.
> > > 
> > > Be careful with the Intel drivers.  They have two modes (at compile
> > > time) depending on PAGE_SIZE in system.  In one of the modes (default
> > > one) you can place init of xdp.frame_sz outside the NAPI loop and init a
> > > single time.  In the other mode you cannot, and it becomes dynamic per
> > > packet.  Intel review this carefully, please!
> > 
> > ack. Actually I kept the xdp.frame_sz configuration in the NAPI loop, but
> > an Intel review would be nice.
> > 
> > Regards,
> > Lorenzo
> > 
> > > -- 
> > > Best regards,
> > >   Jesper Dangaard Brouer
> > >   MSc.CS, Principal Kernel Engineer at Red Hat
> > >   LinkedIn: http://www.linkedin.com/in/brouer
> > > 
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff utility routine
  2020-12-17 18:28             ` Maciej Fijalkowski
@ 2020-12-17 20:31               ` Saeed Mahameed
  0 siblings, 0 replies; 19+ messages in thread
From: Saeed Mahameed @ 2020-12-17 20:31 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Lorenzo Bianconi, Jesper Dangaard Brouer, Lorenzo Bianconi, bpf,
	netdev, davem, kuba, ast, daniel, alexander.duyck

On Thu, 2020-12-17 at 19:28 +0100, Maciej Fijalkowski wrote:
> On Thu, Dec 17, 2020 at 10:16:06AM -0800, Saeed Mahameed wrote:
> > On Wed, 2020-12-16 at 16:01 +0100, Lorenzo Bianconi wrote:
> > > > On Tue, 15 Dec 2020 14:47:10 +0100
> > > > Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote:
> > > > 
> > > > > [...]
> > > > > > >  	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > > index 4dbbbd49c389..fcd1ca3343fb 100644
> > > > > > > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > > > > > > @@ -2393,12 +2393,12 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
> > > > > > >  
> > > > > > >  		/* retrieve a buffer from the ring */
> > > > > > >  		if (!skb) {
> > > > > > > -			xdp.data = page_address(rx_buffer->page) +
> > > > > > > -				   rx_buffer->page_offset;
> > > > > > > -			xdp.data_meta = xdp.data;
> > > > > > > -			xdp.data_hard_start = xdp.data -
> > > > > > > -					      i40e_rx_offset(rx_ring);
> > > > > > > -			xdp.data_end = xdp.data + size;
> > > > > > > +			unsigned int offset = i40e_rx_offset(rx_ring);
> > > > > > > 
> > > > > > > I now see that we could call the i40e_rx_offset() once per napi, so can
> > > > > > > you pull this variable out and have it initialized a single time? Applies
> > > > > > > to other intel drivers as well.
> > 
> > How is this related to this series? I suggest keeping this series clean
> > of vendor-specific, unrelated optimizations; this should be done in a
> > separate patchset.
> 
> Well, Lorenzo explicitly is touching the thing that I referred to, so I
> just ask if he can optimize it while he's at it.
> 
> Of course I'm fine with addressing this by myself once -next opens :)
> 
Oh, don't get me wrong, I am OK with doing this now, and I can do it
myself if you want :), but it shouldn't be part of this series, so we
won't confuse others who want to implement XDP in the future, that's
all.


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2020-12-17 20:31 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-12 17:41 [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Lorenzo Bianconi
2020-12-12 17:41 ` [PATCH v3 bpf-next 1/2] net: xdp: introduce xdp_init_buff utility routine Lorenzo Bianconi
2020-12-16  8:35   ` Jesper Dangaard Brouer
2020-12-16 14:56     ` Lorenzo Bianconi
2020-12-12 17:41 ` [PATCH v3 bpf-next 2/2] net: xdp: introduce xdp_prepare_buff " Lorenzo Bianconi
2020-12-15 12:36   ` Maciej Fijalkowski
2020-12-15 13:47     ` Lorenzo Bianconi
2020-12-15 14:51       ` Daniel Borkmann
2020-12-15 15:06         ` Lorenzo Bianconi
2020-12-15 15:13           ` Maciej Fijalkowski
2020-12-15 20:36             ` Lorenzo Bianconi
2020-12-16  8:30             ` Jesper Dangaard Brouer
2020-12-16  8:52       ` Jesper Dangaard Brouer
2020-12-16 15:01         ` Lorenzo Bianconi
2020-12-17 18:16           ` Saeed Mahameed
2020-12-17 18:28             ` Maciej Fijalkowski
2020-12-17 20:31               ` Saeed Mahameed
2020-12-14 15:32 ` [PATCH v3 bpf-next 0/2] introduce xdp_init_buff/xdp_prepare_buff Martin Habets
2020-12-14 17:53 ` Camelia Alexandra Groza
