* [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern


This series introduces XDP multi-buffer support. The mvneta driver is
the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
please focus on how these new types of xdp_{buff,frame} packets
traverse the different layers and on the layout design. The BPF helpers
are intentionally kept simple: we do not want to expose the internal
layout, so that it can still be changed later.

For now, to keep the design simple and to maintain performance, the XDP
BPF program (still) only has access to the first buffer. Payload access
across multiple buffers is left for a later patchset; this patchset
should still allow for such future extensions. The goal is to lift the
MTU restriction that comes with XDP while maintaining the same
performance as before.

The main idea behind the new multi-buffer layout is to reuse the layout
already used for non-linear SKBs. It relies on the skb_shared_info
struct at the end of the first buffer to link together the subsequent
buffers. Keeping the layout compatible with SKBs also eases and speeds
up creating an SKB from an xdp_{buff,frame}. Converting an xdp_frame to
an SKB and delivering it to the network stack is shown in the cpumap
code (patch 12/12).

A multi-buffer bit (mb) has been introduced in the xdp_{buff,frame}
structures to notify the bpf/network layer that this is an xdp
multi-buffer frame.
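
For reference, here is a minimal sketch (not part of the series) of how
the fragments of a non-linear xdp_buff are expected to be walked. It
assumes the xdp_get_shared_info_from_buff() and skb_frag_*() helpers
already used by the mvneta patches below; the function name is made up
for illustration:

#include <linux/skbuff.h>
#include <net/xdp.h>

/* Total length of the fragments attached to a non-linear xdp_buff.
 * The skb_shared_info area sits at the end of the first buffer and is
 * only valid when the mb bit is set.
 */
static unsigned int xdp_frags_len(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;
	unsigned int i, len = 0;

	if (!xdp->mb)		/* linear buffer: no fragments */
		return 0;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	for (i = 0; i < sinfo->nr_frags; i++)
		len += skb_frag_size(&sinfo->frags[i]);

	return len;
}

This is essentially the pattern followed by mvneta_xdp_put_buff() and
by the bpf_xdp_get_frags_total_size() helper introduced in patch 06/12.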

In order to provide to userspace some metadata about the non-linear
xdp_{buff,frame}, we introduced two bpf helpers (a minimal usage sketch
follows the list):
- bpf_xdp_get_frag_count:
  get the number of fragments for a given xdp multi-buffer.
- bpf_xdp_get_frags_total_size:
  get the total size of the fragments for a given xdp multi-buffer.
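
A minimal BPF-side usage sketch (hypothetical program, assuming the
helper declarations generated from the updated UAPI header; the
complete sample lives in patch 07/12):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp_mb")
int xdp_mb_count(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	int len = data_end - data;			/* linear part */

	len += bpf_xdp_get_frags_total_size(ctx);	/* paged part */
	bpf_printk("frags=%d len=%d",
		   bpf_xdp_get_frag_count(ctx), len);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";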

Typical use cases for this series are:
- Jumbo-frames
- Packet header split (please see Google’s use-case @ NetDevConf 0x14, [0])
- TSO

More info about the main idea behind this approach can be found here [1][2].

We carried out some throughput tests in order to verify that we did not
introduce any performance regression by adding xdp multi-buff support
to mvneta:

Offered load is ~1000 Kpps, packet size is 64B.

commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
- xdp-pass:     ~162Kpps
- xdp-drop:     ~701Kpps
- xdp-tx:       ~185Kpps
- xdp-redirect: ~202Kpps

mvneta xdp multi-buff:
- xdp-pass:     ~163Kpps
- xdp-drop:     ~739Kpps
- xdp-tx:       ~182Kpps
- xdp-redirect: ~202Kpps

This series is based on "bpf: cpumap: remove rcpu pointer from cpu_map_build_skb signature"
https://patchwork.ozlabs.org/project/netdev/patch/33cb9b7dc447de3ea6fd6ce713ac41bca8794423.1601292015.git.lorenzo@kernel.org/

Changes since v2:
- add throughput measurements
- drop bpf_xdp_adjust_mb_header bpf helper
- introduce selftest for xdp multibuffer
- addressed comments on bpf_xdp_get_frag_count
- introduce xdp multi-buff support to cpumaps

Changes since v1:
- Fix use-after-free in xdp_return_{buff/frame}
- Introduce bpf helpers
- Introduce xdp_mb sample program
- access skb_shared_info->nr_frags only on the last fragment

Changes since RFC:
- squash multi-buffer bit initialization in a single patch
- add mvneta non-linear XDP buff support for tx side

[0] https://netdevconf.info/0x14/pub/slides/62/Implementing%20TCP%20RX%20zero%20copy.pdf
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://netdevconf.info/0x14/pub/slides/10/add-xdp-on-driver.pdf (XDP multi-buffers section)

Lorenzo Bianconi (10):
  xdp: introduce mb in xdp_buff/xdp_frame
  xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
  net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
  xdp: add multi-buff support to xdp_return_{buff/frame}
  net: mvneta: add multi buffer support to XDP_TX
  bpf: move user_size out of bpf_test_init
  bpf: introduce multibuff support to bpf_prog_test_run_xdp()
  bpf: add xdp multi-buffer selftest
  net: mvneta: enable jumbo frames for XDP
  bpf: cpumap: introduce xdp multi-buff support

Sameeh Jubran (2):
  bpf: helpers: add multibuffer support
  samples/bpf: add bpf program that uses xdp mb helpers

 drivers/net/ethernet/amazon/ena/ena_netdev.c  |   1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |   1 +
 .../net/ethernet/cavium/thunder/nicvf_main.c  |   1 +
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  |   1 +
 drivers/net/ethernet/intel/i40e/i40e_txrx.c   |   1 +
 drivers/net/ethernet/intel/ice/ice_txrx.c     |   1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   1 +
 .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |   1 +
 drivers/net/ethernet/marvell/mvneta.c         | 131 +++++++------
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |   1 +
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |   1 +
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |   1 +
 .../ethernet/netronome/nfp/nfp_net_common.c   |   1 +
 drivers/net/ethernet/qlogic/qede/qede_fp.c    |   1 +
 drivers/net/ethernet/sfc/rx.c                 |   1 +
 drivers/net/ethernet/socionext/netsec.c       |   1 +
 drivers/net/ethernet/ti/cpsw.c                |   1 +
 drivers/net/ethernet/ti/cpsw_new.c            |   1 +
 drivers/net/hyperv/netvsc_bpf.c               |   1 +
 drivers/net/tun.c                             |   2 +
 drivers/net/veth.c                            |   1 +
 drivers/net/virtio_net.c                      |   2 +
 drivers/net/xen-netfront.c                    |   1 +
 include/net/xdp.h                             |  31 ++-
 include/uapi/linux/bpf.h                      |  14 ++
 kernel/bpf/cpumap.c                           |  45 +----
 net/bpf/test_run.c                            |  45 ++++-
 net/core/dev.c                                |   1 +
 net/core/filter.c                             |  42 ++++
 net/core/xdp.c                                | 104 ++++++++++
 samples/bpf/Makefile                          |   3 +
 samples/bpf/xdp_mb_kern.c                     |  68 +++++++
 samples/bpf/xdp_mb_user.c                     | 182 ++++++++++++++++++
 tools/include/uapi/linux/bpf.h                |  14 ++
 .../testing/selftests/bpf/prog_tests/xdp_mb.c |  77 ++++++++
 .../selftests/bpf/progs/test_xdp_multi_buff.c |  24 +++
 36 files changed, 691 insertions(+), 114 deletions(-)
 create mode 100644 samples/bpf/xdp_mb_kern.c
 create mode 100644 samples/bpf/xdp_mb_user.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c

-- 
2.26.2



* [PATCH v3 net-next 01/12] xdp: introduce mb in xdp_buff/xdp_frame
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Introduce a multi-buffer bit (mb) in xdp_frame/xdp_buff to specify
whether the shared_info area has been properly initialized for
non-linear xdp buffers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 8 ++++++--
 net/core/xdp.c    | 1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 3814fb631d52..42f439f9fcda 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -72,7 +72,8 @@ struct xdp_buff {
 	void *data_hard_start;
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
-	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 mb:1; /* xdp non-linear buffer */
 };
 
 /* Reserve memory area at end-of data area.
@@ -96,7 +97,8 @@ struct xdp_frame {
 	u16 len;
 	u16 headroom;
 	u32 metasize:8;
-	u32 frame_sz:24;
+	u32 frame_sz:23;
+	u32 mb:1; /* xdp non-linear frame */
 	/* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
 	 * while mem info is valid on remote CPU.
 	 */
@@ -141,6 +143,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->mb = frame->mb;
 }
 
 static inline
@@ -167,6 +170,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->mb = xdp->mb;
 
 	return 0;
 }
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 48aba933a5a8..884f140fc3be 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -454,6 +454,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
 	xdpf->headroom = 0;
 	xdpf->metasize = metasize;
 	xdpf->frame_sz = PAGE_SIZE;
+	xdpf->mb = xdp->mb;
 	xdpf->mem.type = MEM_TYPE_PAGE_ORDER0;
 
 	xsk_buff_free(xdp);
-- 
2.26.2



* [PATCH v3 net-next 02/12] xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Initialize multi-buffer bit (mb) to 0 in all XDP-capable drivers.
This is a preliminary patch to enable xdp multi-buffer support.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c        | 1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c       | 1 +
 drivers/net/ethernet/cavium/thunder/nicvf_main.c    | 1 +
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c    | 1 +
 drivers/net/ethernet/intel/i40e/i40e_txrx.c         | 1 +
 drivers/net/ethernet/intel/ice/ice_txrx.c           | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c       | 1 +
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c   | 1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c     | 1 +
 drivers/net/ethernet/mellanox/mlx4/en_rx.c          | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c     | 1 +
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 1 +
 drivers/net/ethernet/qlogic/qede/qede_fp.c          | 1 +
 drivers/net/ethernet/sfc/rx.c                       | 1 +
 drivers/net/ethernet/socionext/netsec.c             | 1 +
 drivers/net/ethernet/ti/cpsw.c                      | 1 +
 drivers/net/ethernet/ti/cpsw_new.c                  | 1 +
 drivers/net/hyperv/netvsc_bpf.c                     | 1 +
 drivers/net/tun.c                                   | 2 ++
 drivers/net/veth.c                                  | 1 +
 drivers/net/virtio_net.c                            | 2 ++
 drivers/net/xen-netfront.c                          | 1 +
 net/core/dev.c                                      | 1 +
 23 files changed, 25 insertions(+)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index e8131dadc22c..339319b97853 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1595,6 +1595,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	res_budget = budget;
 	xdp.rxq = &rx_ring->xdp_rxq;
 	xdp.frame_sz = ENA_PAGE_SIZE;
+	xdp.mb = 0;
 
 	do {
 		xdp_verdict = XDP_PASS;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index fcc262064766..344644b6dd4d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -139,6 +139,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
 	xdp.data_end = *data_ptr + *len;
 	xdp.rxq = &rxr->xdp_rxq;
 	xdp.frame_sz = PAGE_SIZE; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */
+	xdp.mb = 0;
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index 0a94c396173b..7fdabaabab1b 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -553,6 +553,7 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
 	xdp.data_end = xdp.data + len;
 	xdp.rxq = &rq->xdp_rxq;
 	xdp.frame_sz = RCV_FRAG_LEN + XDP_PACKET_HEADROOM;
+	xdp.mb = 0;
 	orig_data = xdp.data;
 
 	rcu_read_lock();
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index fe4caf7aad7c..8410e713162e 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -366,6 +366,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
 
 	xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE -
 		(dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM);
+	xdp.mb = 0;
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index d43ce13a93c9..5df07bc98283 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2332,6 +2332,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 	xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
 #endif
 	xdp.rxq = &rx_ring->xdp_rxq;
+	xdp.mb = 0;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		struct i40e_rx_buffer *rx_buffer;
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index eae75260fe20..d641f513b8d9 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1089,6 +1089,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 #if (PAGE_SIZE < 8192)
 	xdp.frame_sz = ice_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp.mb = 0;
 
 	/* start the loop to process Rx packets bounded by 'budget' */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index a190d5c616fc..39f9d2032b9d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2298,6 +2298,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 #if (PAGE_SIZE < 8192)
 	xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, 0);
 #endif
+	xdp.mb = 0;
 
 	while (likely(total_rx_packets < budget)) {
 		union ixgbe_adv_rx_desc *rx_desc;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 82fce27f682b..1fbc740c266e 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -1129,6 +1129,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 	struct xdp_buff xdp;
 
 	xdp.rxq = &rx_ring->xdp_rxq;
+	xdp.mb = 0;
 
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
 #if (PAGE_SIZE < 8192)
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index f6616c8933ca..01661ade9009 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3558,6 +3558,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 			xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
 			xdp.data_end = xdp.data + rx_bytes;
 			xdp.frame_sz = PAGE_SIZE;
+			xdp.mb = 0;
 
 			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
 				xdp.rxq = &rxq->xdp_rxq_short;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 99d7737e8ad6..de1ae36b068e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -684,6 +684,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
 	xdp_prog = rcu_dereference(ring->xdp_prog);
 	xdp.rxq = &ring->xdp_rxq;
 	xdp.frame_sz = priv->frag_info[0].frag_stride;
+	xdp.mb = 0;
 	doorbell_pending = 0;
 
 	/* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 599f5b5ebc97..82c3e755dadd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1133,6 +1133,7 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
 	xdp->data_end = xdp->data + len;
 	xdp->rxq = &rq->xdp_rxq;
 	xdp->frame_sz = rq->buff.frame0_sz;
+	xdp->mb = 0;
 }
 
 static struct sk_buff *
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index b150da43adb2..69fab1010752 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -1824,6 +1824,7 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 	true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz;
 	xdp.frame_sz = PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM;
 	xdp.rxq = &rx_ring->xdp_rxq;
+	xdp.mb = 0;
 	tx_ring = r_vec->xdp_ring;
 
 	while (pkts_polled < budget) {
diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
index a2494bf85007..14a54094ca08 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
@@ -1096,6 +1096,7 @@ static bool qede_rx_xdp(struct qede_dev *edev,
 	xdp.data_end = xdp.data + *len;
 	xdp.rxq = &rxq->xdp_rxq;
 	xdp.frame_sz = rxq->rx_buf_seg_size; /* PAGE_SIZE when XDP enabled */
+	xdp.mb = 0;
 
 	/* Queues always have a full reset currently, so for the time
 	 * being until there's atomic program replace just mark read
diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index aaa112877561..286feb510c21 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -301,6 +301,7 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
 	xdp.data_end = xdp.data + rx_buf->len;
 	xdp.rxq = &rx_queue->xdp_rxq_info;
 	xdp.frame_sz = efx->rx_page_buf_step;
+	xdp.mb = 0;
 
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 	rcu_read_unlock();
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 806eb651cea3..0f0567083a6c 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -947,6 +947,7 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 
 	xdp.rxq = &dring->xdp_rxq;
 	xdp.frame_sz = PAGE_SIZE;
+	xdp.mb = 0;
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(priv->xdp_prog);
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 9fd1f77190ad..558e0abb03c1 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -407,6 +407,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 		xdp.data_hard_start = pa;
 		xdp.rxq = &priv->xdp_rxq[ch];
 		xdp.frame_sz = PAGE_SIZE;
+		xdp.mb = 0;
 
 		port = priv->emac_port + cpsw->data.dual_emac;
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index f779d2e1b5c5..7baab97e302a 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -350,6 +350,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 		xdp.data_hard_start = pa;
 		xdp.rxq = &priv->xdp_rxq[ch];
 		xdp.frame_sz = PAGE_SIZE;
+		xdp.mb = 0;
 
 		ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port);
 		if (ret != CPSW_XDP_PASS)
diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 440486d9c999..a4bafc64997f 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -50,6 +50,7 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
 	xdp->data_end = xdp->data + len;
 	xdp->rxq = &nvchan->xdp_rxq;
 	xdp->frame_sz = PAGE_SIZE;
+	xdp->mb = 0;
 
 	memcpy(xdp->data, data, len);
 
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index be69d272052f..d8380feb7626 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1641,6 +1641,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 		xdp.data_end = xdp.data + len;
 		xdp.rxq = &tfile->xdp_rxq;
 		xdp.frame_sz = buflen;
+		xdp.mb = 0;
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		if (act == XDP_REDIRECT || act == XDP_TX) {
@@ -2388,6 +2389,7 @@ static int tun_xdp_one(struct tun_struct *tun,
 		xdp_set_data_meta_invalid(xdp);
 		xdp->rxq = &tfile->xdp_rxq;
 		xdp->frame_sz = buflen;
+		xdp->mb = 0;
 
 		act = bpf_prog_run_xdp(xdp_prog, xdp);
 		err = tun_xdp_act(tun, xdp_prog, xdp, act);
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 091e5b4ba042..e25af95a532d 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -711,6 +711,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	xdp.frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start;
 	xdp.frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	xdp.mb = 0;
 
 	orig_data = xdp.data;
 	orig_data_end = xdp.data_end;
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7145c83c6c8c..3d39d7622840 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -690,6 +690,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		xdp.data_meta = xdp.data;
 		xdp.rxq = &rq->xdp_rxq;
 		xdp.frame_sz = buflen;
+		xdp.mb = 0;
 		orig_data = xdp.data;
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
@@ -860,6 +861,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		xdp.data_meta = xdp.data;
 		xdp.rxq = &rq->xdp_rxq;
 		xdp.frame_sz = frame_sz - vi->hdr_len;
+		xdp.mb = 0;
 
 		act = bpf_prog_run_xdp(xdp_prog, &xdp);
 		stats->xdp_packets++;
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 3e9895bec15f..00440ad34ca8 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -870,6 +870,7 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
 	xdp->data_end = xdp->data + len;
 	xdp->rxq = &queue->xdp_rxq;
 	xdp->frame_sz = XEN_PAGE_SIZE - XDP_PACKET_HEADROOM;
+	xdp->mb = 0;
 
 	act = bpf_prog_run_xdp(prog, xdp);
 	switch (act) {
diff --git a/net/core/dev.c b/net/core/dev.c
index 9d55bf5d1a65..1e78b028518d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4640,6 +4640,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	xdp->frame_sz  = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
 	xdp->frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	xdp->mb = 0;
 
 	orig_data_end = xdp->data_end;
 	orig_data = xdp->data;
-- 
2.26.2



* [PATCH v3 net-next 03/12] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Update the multi-buffer bit (mb) in xdp_buff to notify the XDP/eBPF
layer and XDP remote drivers if this is a "non-linear" XDP buffer.
Access skb_shared_info only if the xdp_buff mb bit is set.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 42 +++++++++++++++++----------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index d095718355d3..a431e8478297 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2027,12 +2027,17 @@ static void
 mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		    struct xdp_buff *xdp, int sync_len, bool napi)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	struct skb_shared_info *sinfo;
 	int i;
 
+	if (likely(!xdp->mb))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
 	for (i = 0; i < sinfo->nr_frags; i++)
 		page_pool_put_full_page(rxq->page_pool,
 					skb_frag_page(&sinfo->frags[i]), napi);
+out:
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, napi);
 }
@@ -2234,7 +2239,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2259,9 +2263,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;
 	xdp->data_end = xdp->data + data_len;
 	xdp_set_data_meta_invalid(xdp);
-
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
 }
 
 static void
@@ -2272,9 +2273,9 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct page *page)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int data_len, len, nfrags = xdp->mb ? sinfo->nr_frags : 0;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	int data_len, len;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2288,17 +2289,21 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 				rx_desc->buf_phys_addr,
 				len, dma_dir);
 
-	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
-		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];
+	if (data_len > 0 && nfrags < MAX_SKB_FRAGS) {
+		skb_frag_t *frag = &sinfo->frags[nfrags];
 
 		skb_frag_off_set(frag, pp->rx_offset_correction);
 		skb_frag_size_set(frag, data_len);
 		__skb_frag_set_page(frag, page);
-		sinfo->nr_frags++;
-
-		rx_desc->buf_phys_addr = 0;
+		nfrags++;
+	} else {
+		page_pool_put_full_page(rxq->page_pool, page, true);
 	}
+
+	rx_desc->buf_phys_addr = 0;
+	sinfo->nr_frags = nfrags;
 	*size -= len;
+	xdp->mb = 1;
 }
 
 static struct sk_buff *
@@ -2306,7 +2311,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	int i, num_frags = xdp->mb ? sinfo->nr_frags : 0;
 	struct sk_buff *skb;
 
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
@@ -2319,6 +2324,9 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	skb_put(skb, xdp->data_end - xdp->data);
 	mvneta_rx_csum(pp, desc_status, skb);
 
+	if (likely(!xdp->mb))
+		return skb;
+
 	for (i = 0; i < num_frags; i++) {
 		skb_frag_t *frag = &sinfo->frags[i];
 
@@ -2338,13 +2346,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
-	struct xdp_buff xdp_buf = {
-		.frame_sz = PAGE_SIZE,
-		.rxq = &rxq->xdp_rxq,
-	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
 	u32 desc_status, frame_sz;
+	struct xdp_buff xdp_buf;
+
+	xdp_buf.data_hard_start = NULL;
+	xdp_buf.frame_sz = PAGE_SIZE;
+	xdp_buf.rxq = &rxq->xdp_rxq;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2377,6 +2386,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_status;
 
+			xdp_buf.mb = 0;
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 					     &size, page);
 		} else {
-- 
2.26.2



* [PATCH v3 net-next 04/12] xdp: add multi-buff support to xdp_return_{buff/frame}
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Take into account whether the received xdp_buff/xdp_frame is non-linear
when recycling/returning the frame memory to the allocator.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 18 ++++++++++++++++--
 net/core/xdp.c    | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 42f439f9fcda..4d47076546ff 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -208,10 +208,24 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
 static inline void xdp_release_frame(struct xdp_frame *xdpf)
 {
 	struct xdp_mem_info *mem = &xdpf->mem;
+	struct skb_shared_info *sinfo;
+	int i;
 
 	/* Curr only page_pool needs this */
-	if (mem->type == MEM_TYPE_PAGE_POOL)
-		__xdp_release_frame(xdpf->data, mem);
+	if (mem->type != MEM_TYPE_PAGE_POOL)
+		return;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_release_frame(page_address(page), mem);
+	}
+out:
+	__xdp_release_frame(xdpf->data, mem);
 }
 
 int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 884f140fc3be..6d4fd4dddb00 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -370,18 +370,57 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
 
 void xdp_return_frame(struct xdp_frame *xdpf)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, false);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, false);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);
 
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, true);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, true);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 
 void xdp_return_buff(struct xdp_buff *xdp)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdp->mb))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdp->rxq->mem, true);
+	}
+out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true);
 }
 
-- 
2.26.2



* [PATCH v3 net-next 05/12] net: mvneta: add multi buffer support to XDP_TX
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Introduce the capability to map a non-linear xdp buffer in
mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 79 +++++++++++++++++----------
 1 file changed, 49 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a431e8478297..f709650974ea 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1852,8 +1852,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			bytes_compl += buf->skb->len;
 			pkts_compl++;
 			dev_kfree_skb_any(buf->skb);
-		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
-			   buf->type == MVNETA_TYPE_XDP_NDO) {
+		} else if ((buf->type == MVNETA_TYPE_XDP_TX ||
+			    buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) {
 			if (napi && buf->type == MVNETA_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(buf->xdpf);
 			else
@@ -2046,43 +2046,62 @@ static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 			struct xdp_frame *xdpf, bool dma_map)
 {
-	struct mvneta_tx_desc *tx_desc;
-	struct mvneta_tx_buf *buf;
-	dma_addr_t dma_addr;
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
+	int i, num_frames = xdpf->mb ? sinfo->nr_frags + 1 : 1;
+	struct mvneta_tx_desc *tx_desc = NULL;
+	struct page *page;
 
-	if (txq->count >= txq->tx_stop_threshold)
+	if (txq->count + num_frames >= txq->tx_stop_threshold)
 		return MVNETA_XDP_DROPPED;
 
-	tx_desc = mvneta_txq_next_desc_get(txq);
+	for (i = 0; i < num_frames; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+		skb_frag_t *frag = i ? &sinfo->frags[i - 1] : NULL;
+		int len = frag ? skb_frag_size(frag) : xdpf->len;
+		dma_addr_t dma_addr;
 
-	buf = &txq->buf[txq->txq_put_index];
-	if (dma_map) {
-		/* ndo_xdp_xmit */
-		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
-					  xdpf->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
-			mvneta_txq_desc_put(txq);
-			return MVNETA_XDP_DROPPED;
+		tx_desc = mvneta_txq_next_desc_get(txq);
+		if (dma_map) {
+			/* ndo_xdp_xmit */
+			void *data;
+
+			data = frag ? skb_frag_address(frag) : xdpf->data;
+			dma_addr = dma_map_single(pp->dev->dev.parent, data,
+						  len, DMA_TO_DEVICE);
+			if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
+				for (; i >= 0; i--)
+					mvneta_txq_desc_put(txq);
+				return MVNETA_XDP_DROPPED;
+			}
+			buf->type = MVNETA_TYPE_XDP_NDO;
+		} else {
+			page = frag ? skb_frag_page(frag)
+				    : virt_to_page(xdpf->data);
+			dma_addr = page_pool_get_dma_addr(page);
+			if (frag)
+				dma_addr += skb_frag_off(frag);
+			else
+				dma_addr += sizeof(*xdpf) + xdpf->headroom;
+			dma_sync_single_for_device(pp->dev->dev.parent,
+						   dma_addr, len,
+						   DMA_BIDIRECTIONAL);
+			buf->type = MVNETA_TYPE_XDP_TX;
 		}
-		buf->type = MVNETA_TYPE_XDP_NDO;
-	} else {
-		struct page *page = virt_to_page(xdpf->data);
+		buf->xdpf = i ? NULL : xdpf;
 
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		if (!i)
+			tx_desc->command = MVNETA_TXD_F_DESC;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
 
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/*last descriptor */
+	tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
 }
-- 
2.26.2



* [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

From: Sameeh Jubran <sameehj@amazon.com>

The implementation is based on this [0] draft by Jesper D. Brouer.

Provided two new helpers:

* bpf_xdp_get_frag_count()
* bpf_xdp_get_frags_total_size()

[0] xdp mb design - https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/uapi/linux/bpf.h       | 14 ++++++++++++
 net/core/filter.c              | 42 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 14 ++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index a22812561064..6f97dce8cccf 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3586,6 +3586,18 @@ union bpf_attr {
  * 		the data in *dst*. This is a wrapper of **copy_from_user**\ ().
  * 	Return
  * 		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the number of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The number of fragments
+ *
+ * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The total size of fragments for a given xdp multi-buffer.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3737,6 +3749,8 @@ union bpf_attr {
 	FN(inode_storage_delete),	\
 	FN(d_path),			\
 	FN(copy_from_user),		\
+	FN(xdp_get_frag_count),		\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index 706f8db0ccf8..7f33cfae219c 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3475,6 +3475,44 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_xdp_get_frag_count, struct  xdp_buff*, xdp)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+
+	return xdp->mb ? sinfo->nr_frags : 0;
+}
+
+const struct bpf_func_proto bpf_xdp_get_frag_count_proto = {
+	.func		= bpf_xdp_get_frag_count,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
+BPF_CALL_1(bpf_xdp_get_frags_total_size, struct  xdp_buff*, xdp)
+{
+	struct skb_shared_info *sinfo;
+	int nfrags, i, size = 0;
+
+	if (likely(!xdp->mb))
+		return 0;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	nfrags = min_t(u8, sinfo->nr_frags, MAX_SKB_FRAGS);
+
+	for (i = 0; i < nfrags; i++)
+		size += skb_frag_size(&sinfo->frags[i]);
+
+	return size;
+}
+
+const struct bpf_func_proto bpf_xdp_get_frags_total_size_proto = {
+	.func		= bpf_xdp_get_frags_total_size,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
@@ -6824,6 +6862,10 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_xdp_redirect_map_proto;
 	case BPF_FUNC_xdp_adjust_tail:
 		return &bpf_xdp_adjust_tail_proto;
+	case BPF_FUNC_xdp_get_frag_count:
+		return &bpf_xdp_get_frag_count_proto;
+	case BPF_FUNC_xdp_get_frags_total_size:
+		return &bpf_xdp_get_frags_total_size_proto;
 	case BPF_FUNC_fib_lookup:
 		return &bpf_xdp_fib_lookup_proto;
 #ifdef CONFIG_INET
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index a22812561064..6f97dce8cccf 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3586,6 +3586,18 @@ union bpf_attr {
  * 		the data in *dst*. This is a wrapper of **copy_from_user**\ ().
  * 	Return
  * 		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the number of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The number of fragments
+ *
+ * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The total size of fragments for a given xdp multi-buffer.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3737,6 +3749,8 @@ union bpf_attr {
 	FN(inode_storage_delete),	\
 	FN(d_path),			\
 	FN(copy_from_user),		\
+	FN(xdp_get_frag_count),		\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.26.2



* [PATCH v3 net-next 07/12] samples/bpf: add bpf program that uses xdp mb helpers
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

From: Sameeh Jubran <sameehj@amazon.com>

The bpf program returns XDP_PASS for every packet and calculates the
total number of bytes in its linear and paged parts.

The program is executed with:
./xdp_mb [if name]

and has the following output format:
[if index]: [rx packet count] pkt/s, [rx fragment count] frags/s, [number of bytes] bytes/s

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 samples/bpf/Makefile      |   3 +
 samples/bpf/xdp_mb_kern.c |  68 ++++++++++++++
 samples/bpf/xdp_mb_user.c | 182 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 253 insertions(+)
 create mode 100644 samples/bpf/xdp_mb_kern.c
 create mode 100644 samples/bpf/xdp_mb_user.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 4f1ed0e3cf9f..12e32516f02a 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -54,6 +54,7 @@ tprogs-y += task_fd_query
 tprogs-y += xdp_sample_pkts
 tprogs-y += ibumad
 tprogs-y += hbm
+tprogs-y += xdp_mb
 
 # Libbpf dependencies
 LIBBPF = $(TOOLS_PATH)/lib/bpf/libbpf.a
@@ -111,6 +112,7 @@ task_fd_query-objs := bpf_load.o task_fd_query_user.o $(TRACE_HELPERS)
 xdp_sample_pkts-objs := xdp_sample_pkts_user.o $(TRACE_HELPERS)
 ibumad-objs := bpf_load.o ibumad_user.o $(TRACE_HELPERS)
 hbm-objs := bpf_load.o hbm.o $(CGROUP_HELPERS)
+xdp_mb-objs := xdp_mb_user.o
 
 # Tell kbuild to always build the programs
 always-y := $(tprogs-y)
@@ -172,6 +174,7 @@ always-y += ibumad_kern.o
 always-y += hbm_out_kern.o
 always-y += hbm_edt_kern.o
 always-y += xdpsock_kern.o
+always-y += xdp_mb_kern.o
 
 ifeq ($(ARCH), arm)
 # Strip all except -D__LINUX_ARM_ARCH__ option needed to handle linux
diff --git a/samples/bpf/xdp_mb_kern.c b/samples/bpf/xdp_mb_kern.c
new file mode 100644
index 000000000000..554c3b9a3243
--- /dev/null
+++ b/samples/bpf/xdp_mb_kern.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All rights reserved.
+ */
+#define KBUILD_MODNAME "foo"
+#include <uapi/linux/bpf.h>
+#include <linux/in.h>
+#include <linux/if_ether.h>
+#include <linux/if_packet.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <bpf/bpf_helpers.h>
+
+/* count RX packets */
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, 1);
+} rx_cnt SEC(".maps");
+
+/* count RX fragments */
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, 1);
+} rx_frags SEC(".maps");
+
+/* count total number of bytes */
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, 1);
+} tot_len SEC(".maps");
+
+SEC("xdp_mb")
+int xdp_mb_prog(struct xdp_md *ctx)
+{
+	void *data_end = (void *)(long)ctx->data_end;
+	void *data = (void *)(long)ctx->data;
+	u32 frag_offset = 0, frag_size = 0;
+	u32 key = 0, nfrags;
+	long *value;
+	int i, len;
+
+	value = bpf_map_lookup_elem(&rx_cnt, &key);
+	if (value)
+		*value += 1;
+
+	len = data_end - data;
+	nfrags = bpf_xdp_get_frag_count(ctx);
+	len += bpf_xdp_get_frags_total_size(ctx);
+
+	value = bpf_map_lookup_elem(&tot_len, &key);
+	if (value)
+		*value += len;
+
+	value = bpf_map_lookup_elem(&rx_frags, &key);
+	if (value)
+		*value += nfrags;
+
+	return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/xdp_mb_user.c b/samples/bpf/xdp_mb_user.c
new file mode 100644
index 000000000000..6f555e94b748
--- /dev/null
+++ b/samples/bpf/xdp_mb_user.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All rights reserved.
+ */
+#include <linux/bpf.h>
+#include <linux/if_link.h>
+#include <assert.h>
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <sys/resource.h>
+#include <net/if.h>
+
+#include "bpf_util.h"
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_DRV_MODE;
+static __u32 prog_id;
+static int rx_cnt_fd, tot_len_fd, rx_frags_fd;
+static int ifindex;
+
+static void int_exit(int sig)
+{
+	__u32 curr_prog_id = 0;
+
+	if (bpf_get_link_xdp_id(ifindex, &curr_prog_id, xdp_flags)) {
+		printf("bpf_get_link_xdp_id failed\n");
+		exit(1);
+	}
+	if (prog_id == curr_prog_id)
+		bpf_set_link_xdp_fd(ifindex, -1, xdp_flags);
+	else if (!curr_prog_id)
+		printf("couldn't find a prog id on a given interface\n");
+	else
+		printf("program on interface changed, not removing\n");
+	exit(0);
+}
+
+/* count total packets and bytes per second */
+static void poll_stats(int interval)
+{
+	unsigned int nr_cpus = bpf_num_possible_cpus();
+	__u64 rx_frags_cnt[nr_cpus], rx_frags_cnt_prev[nr_cpus];
+	__u64 tot_len[nr_cpus], tot_len_prev[nr_cpus];
+	__u64 rx_cnt[nr_cpus], rx_cnt_prev[nr_cpus];
+	int i;
+
+	memset(rx_frags_cnt_prev, 0, sizeof(rx_frags_cnt_prev));
+	memset(tot_len_prev, 0, sizeof(tot_len_prev));
+	memset(rx_cnt_prev, 0, sizeof(rx_cnt_prev));
+
+	while (1) {
+		__u64 n_rx_pkts = 0, rx_frags = 0, rx_len = 0;
+		__u32 key = 0;
+
+		sleep(interval);
+
+		/* fetch rx cnt */
+		assert(bpf_map_lookup_elem(rx_cnt_fd, &key, rx_cnt) == 0);
+		for (i = 0; i < nr_cpus; i++)
+			n_rx_pkts += (rx_cnt[i] - rx_cnt_prev[i]);
+		memcpy(rx_cnt_prev, rx_cnt, sizeof(rx_cnt));
+
+		/* fetch rx frags */
+		assert(bpf_map_lookup_elem(rx_frags_fd, &key, rx_frags_cnt) == 0);
+		for (i = 0; i < nr_cpus; i++)
+			rx_frags += (rx_frags_cnt[i] - rx_frags_cnt_prev[i]);
+		memcpy(rx_frags_cnt_prev, rx_frags_cnt, sizeof(rx_frags_cnt));
+
+		/* count total bytes of packets */
+		assert(bpf_map_lookup_elem(tot_len_fd, &key, tot_len) == 0);
+		for (i = 0; i < nr_cpus; i++)
+			rx_len += (tot_len[i] - tot_len_prev[i]);
+		memcpy(tot_len_prev, tot_len, sizeof(tot_len));
+
+		if (n_rx_pkts)
+			printf("ifindex %i: %10llu pkt/s, %10llu frags/s, %10llu bytes/s\n",
+			       ifindex, n_rx_pkts / interval, rx_frags / interval,
+			       rx_len / interval);
+	}
+}
+
+static void usage(const char *prog)
+{
+	fprintf(stderr,
+		"%s: %s [OPTS] IFACE\n\n"
+		"OPTS:\n"
+		"    -F    force loading prog\n",
+		__func__, prog);
+}
+
+int main(int argc, char **argv)
+{
+	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	struct bpf_prog_load_attr prog_load_attr = {
+		.prog_type	= BPF_PROG_TYPE_XDP,
+	};
+	int prog_fd, opt;
+	struct bpf_prog_info info = {};
+	__u32 info_len = sizeof(info);
+	const char *optstr = "F";
+	struct bpf_program *prog;
+	struct bpf_object *obj;
+	char filename[256];
+	int err;
+
+	while ((opt = getopt(argc, argv, optstr)) != -1) {
+		switch (opt) {
+		case 'F':
+			xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST;
+			break;
+		default:
+			usage(basename(argv[0]));
+			return 1;
+		}
+	}
+
+	if (optind == argc) {
+		usage(basename(argv[0]));
+		return 1;
+	}
+
+	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
+		perror("setrlimit(RLIMIT_MEMLOCK)");
+		return 1;
+	}
+
+	ifindex = if_nametoindex(argv[optind]);
+	if (!ifindex) {
+		perror("if_nametoindex");
+		return 1;
+	}
+
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	prog_load_attr.file = filename;
+
+	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
+		return 1;
+
+	prog = bpf_program__next(NULL, obj);
+	if (!prog) {
+		printf("finding a prog in obj file failed\n");
+		return 1;
+	}
+
+	if (!prog_fd) {
+		printf("bpf_prog_load_xattr: %s\n", strerror(errno));
+		return 1;
+	}
+
+	rx_cnt_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt");
+	rx_frags_fd = bpf_object__find_map_fd_by_name(obj, "rx_frags");
+	tot_len_fd = bpf_object__find_map_fd_by_name(obj, "tot_len");
+	if (rx_cnt_fd < 0 || rx_frags_fd < 0 || tot_len_fd < 0) {
+		printf("bpf_object__find_map_fd_by_name failed\n");
+		return 1;
+	}
+
+	if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+		printf("ERROR: link set xdp fd failed on %d\n", ifindex);
+		return 1;
+	}
+
+	err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
+	if (err) {
+		printf("can't get prog info - %s\n", strerror(errno));
+		return err;
+	}
+	prog_id = info.id;
+
+	signal(SIGINT, int_exit);
+	signal(SIGTERM, int_exit);
+
+	poll_stats(1);
+
+	return 0;
+}
-- 
2.26.2



* [PATCH v3 net-next 08/12] bpf: move user_size out of bpf_test_init
From: Lorenzo Bianconi @ 2020-09-30 15:41 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Move user_size out of bpf_test_init() and pass it explicitly in the
routine signature. This is a preliminary patch to introduce the xdp
multi-buff selftest.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 net/bpf/test_run.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index a66f211726e7..5608d5a902ff 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -170,11 +170,10 @@ __diag_pop();
 
 ALLOW_ERROR_INJECTION(bpf_modify_return_test, ERRNO);
 
-static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
-			   u32 headroom, u32 tailroom)
+static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
+			   u32 size, u32 headroom, u32 tailroom)
 {
 	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
-	u32 user_size = kattr->test.data_size_in;
 	void *data;
 
 	if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom)
@@ -410,7 +409,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
 	void *data;
 	int ret;
 
-	data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
+	data = bpf_test_init(kattr, kattr->test.data_size_in,
+			     size, NET_SKB_PAD + NET_IP_ALIGN,
 			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
 	if (IS_ERR(data))
 		return PTR_ERR(data);
@@ -547,7 +547,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	/* XDP have extra tailroom as (most) drivers use full page */
 	max_data_sz = 4096 - headroom - tailroom;
 
-	data = bpf_test_init(kattr, max_data_sz, headroom, tailroom);
+	data = bpf_test_init(kattr, kattr->test.data_size_in,
+			     max_data_sz, headroom, tailroom);
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
@@ -610,7 +611,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 	if (size < ETH_HLEN)
 		return -EINVAL;
 
-	data = bpf_test_init(kattr, size, 0, 0);
+	data = bpf_test_init(kattr, kattr->test.data_size_in, size, 0, 0);
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
-- 
2.26.2



* [PATCH v3 net-next 09/12] bpf: introduce multibuff support to bpf_prog_test_run_xdp()
From: Lorenzo Bianconi @ 2020-09-30 15:42 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Introduce the capability to allocate an xdp multi-buff in the
bpf_prog_test_run_xdp() routine. This is a preliminary patch to
introduce the selftests for the new xdp multi-buff eBPF helpers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 net/bpf/test_run.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 5608d5a902ff..7268542b0f3c 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -532,23 +532,22 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 {
 	u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	u32 headroom = XDP_PACKET_HEADROOM;
-	u32 size = kattr->test.data_size_in;
 	u32 repeat = kattr->test.repeat;
 	struct netdev_rx_queue *rxqueue;
+	struct skb_shared_info *sinfo;
 	struct xdp_buff xdp = {};
+	u32 max_data_sz, size;
 	u32 retval, duration;
-	u32 max_data_sz;
 	void *data;
-	int ret;
+	int i, ret;
 
 	if (kattr->test.ctx_in || kattr->test.ctx_out)
 		return -EINVAL;
 
-	/* XDP have extra tailroom as (most) drivers use full page */
 	max_data_sz = 4096 - headroom - tailroom;
+	size = min_t(u32, kattr->test.data_size_in, max_data_sz);
 
-	data = bpf_test_init(kattr, kattr->test.data_size_in,
-			     max_data_sz, headroom, tailroom);
+	data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom);
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
@@ -558,6 +557,28 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	xdp.data_end = xdp.data + size;
 	xdp.frame_sz = headroom + max_data_sz + tailroom;
 
+	sinfo = xdp_get_shared_info_from_buff(&xdp);
+	if (unlikely(kattr->test.data_size_in > size)) {
+		for (; size < kattr->test.data_size_in; size += PAGE_SIZE) {
+			skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];
+			struct page *page;
+			int data_len;
+
+			page = alloc_page(GFP_KERNEL);
+			if (!page) {
+				ret = -ENOMEM;
+				goto out;
+			}
+
+			__skb_frag_set_page(frag, page);
+			data_len = min_t(int, kattr->test.data_size_in - size,
+					 PAGE_SIZE);
+			skb_frag_size_set(frag, data_len);
+			sinfo->nr_frags++;
+		}
+		xdp.mb = 1;
+	}
+
 	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
 	xdp.rxq = &rxqueue->xdp_rxq;
 	bpf_prog_change_xdp(NULL, prog);
@@ -569,7 +590,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration);
 out:
 	bpf_prog_change_xdp(prog, NULL);
+	for (i = 0; i < sinfo->nr_frags; i++)
+		__free_page(skb_frag_page(&sinfo->frags[i]));
 	kfree(data);
+
 	return ret;
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v3 net-next 10/12] bpf: add xdp multi-buffer selftest
  2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
                   ` (8 preceding siblings ...)
  2020-09-30 15:42 ` [PATCH v3 net-next 09/12] bpf: introduce multibuff support to bpf_prog_test_run_xdp() Lorenzo Bianconi
@ 2020-09-30 15:42 ` Lorenzo Bianconi
  2020-10-01  7:43   ` Eelco Chaudron
  2020-09-30 15:42 ` [PATCH v3 net-next 11/12] net: mvneta: enable jumbo frames for XDP Lorenzo Bianconi
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-09-30 15:42 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Introduce an xdp multi-buffer selftest for the following eBPF helpers:
- bpf_xdp_get_frags_total_size
- bpf_xdp_get_frag_count

Co-developed-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 .../testing/selftests/bpf/prog_tests/xdp_mb.c | 77 +++++++++++++++++++
 .../selftests/bpf/progs/test_xdp_multi_buff.c | 24 ++++++
 2 files changed, 101 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_mb.c b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c
new file mode 100644
index 000000000000..8cfe7253bf2a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <unistd.h>
+#include <linux/kernel.h>
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "test_xdp_multi_buff.skel.h"
+
+static void test_xdp_mb_check_len(void)
+{
+	int test_sizes[] = { 128, 4096, 9000 };
+	struct test_xdp_multi_buff *pkt_skel;
+	char *pkt_in = NULL, *pkt_out = NULL;
+	__u32 duration = 0, retval, size;
+	int err, pkt_fd, i;
+
+	/* Load XDP program */
+	pkt_skel = test_xdp_multi_buff__open_and_load();
+	if (CHECK(!pkt_skel, "pkt_skel_load", "test_xdp_mb skeleton failed\n"))
+		goto out;
+
+	/* Allocate resources */
+	pkt_out = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]);
+	pkt_in = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]);
+	if (CHECK(!pkt_in || !pkt_out, "malloc",
+		  "Failed malloc, in = %p, out %p\n", pkt_in, pkt_out))
+		goto out;
+
+	pkt_fd = bpf_program__fd(pkt_skel->progs._xdp_check_mb_len);
+	if (pkt_fd < 0)
+		goto out;
+
+	/* Run test for specific set of packets */
+	for (i = 0; i < ARRAY_SIZE(test_sizes); i++) {
+		int frag_count;
+
+		/* Run test program */
+		err = bpf_prog_test_run(pkt_fd, 1, &pkt_in, test_sizes[i],
+					pkt_out, &size, &retval, &duration);
+
+		if (CHECK(err || retval != XDP_PASS, // || size != test_sizes[i],
+			  "test_run", "err %d errno %d retval %d size %d[%d]\n",
+			  err, errno, retval, size, test_sizes[i]))
+			goto out;
+
+		/* Verify test results */
+		frag_count = DIV_ROUND_UP(
+			test_sizes[i] - pkt_skel->data->test_result_xdp_len,
+			getpagesize());
+
+		if (CHECK(pkt_skel->data->test_result_frag_count != frag_count,
+			  "result", "frag_count = %llu != %u\n",
+			  pkt_skel->data->test_result_frag_count, frag_count))
+			goto out;
+
+		if (CHECK(pkt_skel->data->test_result_frag_len != test_sizes[i] -
+			  pkt_skel->data->test_result_xdp_len,
+			  "result", "frag_len = %llu != %llu\n",
+			  pkt_skel->data->test_result_frag_len,
+			  test_sizes[i] - pkt_skel->data->test_result_xdp_len))
+			goto out;
+	}
+out:
+	if (pkt_out)
+		free(pkt_out);
+	if (pkt_in)
+		free(pkt_in);
+
+	test_xdp_multi_buff__destroy(pkt_skel);
+}
+
+void test_xdp_mb(void)
+{
+	if (test__start_subtest("xdp_mb_check_len_frags"))
+		test_xdp_mb_check_len();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
new file mode 100644
index 000000000000..1a46e0925282
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <linux/if_ether.h>
+#include <bpf/bpf_helpers.h>
+#include <stdint.h>
+
+__u64 test_result_frag_len = UINT64_MAX;
+__u64 test_result_frag_count = UINT64_MAX;
+__u64 test_result_xdp_len = UINT64_MAX;
+
+SEC("xdp_check_mb_len")
+int _xdp_check_mb_len(struct xdp_md *xdp)
+{
+	void *data_end = (void *)(long)xdp->data_end;
+	void *data = (void *)(long)xdp->data;
+
+	test_result_xdp_len = (__u64)(data_end - data);
+	test_result_frag_len = bpf_xdp_get_frags_total_size(xdp);
+	test_result_frag_count = bpf_xdp_get_frag_count(xdp);
+	return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v3 net-next 11/12] net: mvneta: enable jumbo frames for XDP
  2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
                   ` (9 preceding siblings ...)
  2020-09-30 15:42 ` [PATCH v3 net-next 10/12] bpf: add xdp multi-buffer selftest Lorenzo Bianconi
@ 2020-09-30 15:42 ` Lorenzo Bianconi
  2020-09-30 15:42 ` [PATCH v3 net-next 12/12] bpf: cpumap: introduce xdp multi-buff support Lorenzo Bianconi
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-09-30 15:42 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Enable the interface to receive jumbo frames even when it is running in
XDP mode, now that the driver supports non-linear XDP buffers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f709650974ea..e3352ed13ea8 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3743,11 +3743,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
 	}
 
-	if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu);
-		return -EINVAL;
-	}
-
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
@@ -4445,11 +4440,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	struct mvneta_port *pp = netdev_priv(dev);
 	struct bpf_prog *old_prog;
 
-	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
-		return -EOPNOTSUPP;
-	}
-
 	if (pp->bm_priv) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Hardware Buffer Management not supported on XDP");
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v3 net-next 12/12] bpf: cpumap: introduce xdp multi-buff support
  2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
                   ` (10 preceding siblings ...)
  2020-09-30 15:42 ` [PATCH v3 net-next 11/12] net: mvneta: enable jumbo frames for XDP Lorenzo Bianconi
@ 2020-09-30 15:42 ` Lorenzo Bianconi
  2020-09-30 16:31 ` [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Jakub Kicinski
  2020-09-30 19:47 ` John Fastabend
  13 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-09-30 15:42 UTC (permalink / raw)
  To: netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Introduce the __xdp_build_skb_from_frame() and xdp_build_skb_from_frame()
utility routines to build an skb from an xdp_frame, and use them to add
xdp multi-buff support to cpumap.
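
A minimal usage sketch for the exported helper, assuming a caller that
holds an xdp_frame *xdpf and the net_device *dev it was received on (only
the cpumap user in this patch is part of the series):

	/* Sketch only: build an skb from a (possibly multi-buffer)
	 * xdp_frame and hand it to the stack; on failure return the
	 * frame so its pages are released.
	 */
	struct sk_buff *skb = xdp_build_skb_from_frame(xdpf, dev);

	if (!skb) {
		xdp_return_frame(xdpf);
		return;
	}
	netif_receive_skb(skb);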

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h   |  5 ++++
 kernel/bpf/cpumap.c | 45 +------------------------------
 net/core/xdp.c      | 64 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 70 insertions(+), 44 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 4d47076546ff..8d9224ef75ee 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -134,6 +134,11 @@ void xdp_warn(const char *msg, const char *func, const int line);
 #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__)
 
 struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp);
+struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
+					   struct sk_buff *skb,
+					   struct net_device *dev);
+struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
+					 struct net_device *dev);
 
 static inline
 void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index c61a23b564aa..fa07b4226836 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -155,49 +155,6 @@ static void cpu_map_kthread_stop(struct work_struct *work)
 	kthread_stop(rcpu->kthread);
 }
 
-static struct sk_buff *cpu_map_build_skb(struct xdp_frame *xdpf,
-					 struct sk_buff *skb)
-{
-	unsigned int hard_start_headroom;
-	unsigned int frame_size;
-	void *pkt_data_start;
-
-	/* Part of headroom was reserved to xdpf */
-	hard_start_headroom = sizeof(struct xdp_frame) +  xdpf->headroom;
-
-	/* Memory size backing xdp_frame data already have reserved
-	 * room for build_skb to place skb_shared_info in tailroom.
-	 */
-	frame_size = xdpf->frame_sz;
-
-	pkt_data_start = xdpf->data - hard_start_headroom;
-	skb = build_skb_around(skb, pkt_data_start, frame_size);
-	if (unlikely(!skb))
-		return NULL;
-
-	skb_reserve(skb, hard_start_headroom);
-	__skb_put(skb, xdpf->len);
-	if (xdpf->metasize)
-		skb_metadata_set(skb, xdpf->metasize);
-
-	/* Essential SKB info: protocol and skb->dev */
-	skb->protocol = eth_type_trans(skb, xdpf->dev_rx);
-
-	/* Optional SKB info, currently missing:
-	 * - HW checksum info		(skb->ip_summed)
-	 * - HW RX hash			(skb_set_hash)
-	 * - RX ring dev queue index	(skb_record_rx_queue)
-	 */
-
-	/* Until page_pool get SKB return path, release DMA here */
-	xdp_release_frame(xdpf);
-
-	/* Allow SKB to reuse area used by xdp_frame */
-	xdp_scrub_frame(xdpf);
-
-	return skb;
-}
-
 static void __cpu_map_ring_cleanup(struct ptr_ring *ring)
 {
 	/* The tear-down procedure should have made sure that queue is
@@ -364,7 +321,7 @@ static int cpu_map_kthread_run(void *data)
 			struct sk_buff *skb = skbs[i];
 			int ret;
 
-			skb = cpu_map_build_skb(xdpf, skb);
+			skb = __xdp_build_skb_from_frame(xdpf, skb, xdpf->dev_rx);
 			if (!skb) {
 				xdp_return_frame(xdpf);
 				continue;
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 6d4fd4dddb00..a6bdefed92e6 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -507,3 +507,67 @@ void xdp_warn(const char *msg, const char *func, const int line)
 	WARN(1, "XDP_WARN: %s(line:%d): %s\n", func, line, msg);
 };
 EXPORT_SYMBOL_GPL(xdp_warn);
+
+struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
+					   struct sk_buff *skb,
+					   struct net_device *dev)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
+	unsigned int headroom = sizeof(*xdpf) +  xdpf->headroom;
+	int i, num_frags = xdpf->mb ? sinfo->nr_frags : 0;
+	void *hard_start = xdpf->data - headroom;
+
+	skb = build_skb_around(skb, hard_start, xdpf->frame_sz);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, headroom);
+	__skb_put(skb, xdpf->len);
+	if (xdpf->metasize)
+		skb_metadata_set(skb, xdpf->metasize);
+
+	if (likely(!num_frags))
+		goto out;
+
+	for (i = 0; i < num_frags; i++) {
+		skb_frag_t *frag = &sinfo->frags[i];
+
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				skb_frag_page(frag), skb_frag_off(frag),
+				skb_frag_size(frag), xdpf->frame_sz);
+	}
+
+out:
+	/* Essential SKB info: protocol and skb->dev */
+	skb->protocol = eth_type_trans(skb, dev);
+
+	/* Optional SKB info, currently missing:
+	 * - HW checksum info		(skb->ip_summed)
+	 * - HW RX hash			(skb_set_hash)
+	 * - RX ring dev queue index	(skb_record_rx_queue)
+	 */
+
+	/* Until page_pool get SKB return path, release DMA here */
+	xdp_release_frame(xdpf);
+
+	/* Allow SKB to reuse area used by xdp_frame */
+	xdp_scrub_frame(xdpf);
+
+	return skb;
+}
+EXPORT_SYMBOL_GPL(__xdp_build_skb_from_frame);
+
+struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
+					 struct net_device *dev)
+{
+	struct sk_buff *skb;
+
+	skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
+	if (unlikely(!skb))
+		return NULL;
+
+	memset(skb, 0, offsetof(struct sk_buff, tail));
+
+	return __xdp_build_skb_from_frame(xdpf, skb, dev);
+}
+EXPORT_SYMBOL_GPL(xdp_build_skb_from_frame);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
  2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
                   ` (11 preceding siblings ...)
  2020-09-30 15:42 ` [PATCH v3 net-next 12/12] bpf: cpumap: introduce xdp multi-buff support Lorenzo Bianconi
@ 2020-09-30 16:31 ` Jakub Kicinski
  2020-09-30 16:39   ` Lorenzo Bianconi
  2020-09-30 19:47 ` John Fastabend
  13 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2020-09-30 16:31 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, bpf, davem, sameehj, john.fastabend, daniel, ast,
	shayagr, brouer, echaudro, lorenzo.bianconi, dsahern

On Wed, 30 Sep 2020 17:41:51 +0200 Lorenzo Bianconi wrote:
> This series introduce XDP multi-buffer support. The mvneta driver is
> the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> please focus on how these new types of xdp_{buff,frame} packets
> traverse the different layers and the layout design. It is on purpose
> that BPF-helpers are kept simple, as we don't want to expose the
> internal layout to allow later changes.

This does not apply cleanly to net-next 🤔

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
  2020-09-30 16:31 ` [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Jakub Kicinski
@ 2020-09-30 16:39   ` Lorenzo Bianconi
  2020-09-30 21:40     ` Jakub Kicinski
  0 siblings, 1 reply; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-09-30 16:39 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, bpf, davem, sameehj, john.fastabend, daniel, ast,
	shayagr, brouer, echaudro, lorenzo.bianconi, dsahern

[-- Attachment #1: Type: text/plain, Size: 833 bytes --]

> On Wed, 30 Sep 2020 17:41:51 +0200 Lorenzo Bianconi wrote:
> > This series introduce XDP multi-buffer support. The mvneta driver is
> > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > please focus on how these new types of xdp_{buff,frame} packets
> > traverse the different layers and the layout design. It is on purpose
> > that BPF-helpers are kept simple, as we don't want to expose the
> > internal layout to allow later changes.
> 
> This does not apply cleanly to net-next 🤔

Hi Jakub,

patch 12/12 ("bpf: cpumap: introduce xdp multi-buff support") is based on commit
efa90b50934c ("cpumap: Remove rcpu pointer from cpu_map_build_skb signature")
already in bpf-next. I thought it was important to add patch 12/12 to the
series. Do you have other conflicts?

Regards,
Lorenzo

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
  2020-09-30 15:41 ` [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support Lorenzo Bianconi
@ 2020-09-30 19:11   ` Alexei Starovoitov
  2020-10-01  9:47     ` Jesper Dangaard Brouer
  2020-10-01 15:05     ` Lorenzo Bianconi
  0 siblings, 2 replies; 24+ messages in thread
From: Alexei Starovoitov @ 2020-09-30 19:11 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, bpf, davem, sameehj, kuba, john.fastabend, daniel, ast,
	shayagr, brouer, echaudro, lorenzo.bianconi, dsahern

On Wed, Sep 30, 2020 at 05:41:57PM +0200, Lorenzo Bianconi wrote:
> From: Sameeh Jubran <sameehj@amazon.com>
> 
> The implementation is based on this [0] draft by Jesper D. Brouer.
> 
> Provided two new helpers:
> 
> * bpf_xdp_get_frag_count()
> * bpf_xdp_get_frags_total_size()
> 
> [0] xdp mb design - https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  include/uapi/linux/bpf.h       | 14 ++++++++++++
>  net/core/filter.c              | 42 ++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h | 14 ++++++++++++
>  3 files changed, 70 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index a22812561064..6f97dce8cccf 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3586,6 +3586,18 @@ union bpf_attr {
>   * 		the data in *dst*. This is a wrapper of **copy_from_user**\ ().
>   * 	Return
>   * 		0 on success, or a negative error in case of failure.
> + *
> + * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
> + *	Description
> + *		Get the number of fragments for a given xdp multi-buffer.
> + *	Return
> + *		The number of fragments
> + *
> + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
> + *	Description
> + *		Get the total size of fragments for a given xdp multi-buffer.
> + *	Return
> + *		The total size of fragments for a given xdp multi-buffer.
>   */
>  #define __BPF_FUNC_MAPPER(FN)		\
>  	FN(unspec),			\
> @@ -3737,6 +3749,8 @@ union bpf_attr {
>  	FN(inode_storage_delete),	\
>  	FN(d_path),			\
>  	FN(copy_from_user),		\
> +	FN(xdp_get_frag_count),		\
> +	FN(xdp_get_frags_total_size),	\
>  	/* */

Please route the set via bpf-next otherwise merge conflicts will be severe.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
  2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
                   ` (12 preceding siblings ...)
  2020-09-30 16:31 ` [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Jakub Kicinski
@ 2020-09-30 19:47 ` John Fastabend
  2020-10-01  9:04   ` Lorenzo Bianconi
  13 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2020-09-30 19:47 UTC (permalink / raw)
  To: Lorenzo Bianconi, netdev
  Cc: bpf, davem, sameehj, kuba, john.fastabend, daniel, ast, shayagr,
	brouer, echaudro, lorenzo.bianconi, dsahern

Lorenzo Bianconi wrote:
> This series introduce XDP multi-buffer support. The mvneta driver is
> the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> please focus on how these new types of xdp_{buff,frame} packets
> traverse the different layers and the layout design. It is on purpose
> that BPF-helpers are kept simple, as we don't want to expose the
> internal layout to allow later changes.
> 
> For now, to keep the design simple and to maintain performance, the XDP
> BPF-prog (still) only have access to the first-buffer. It is left for
> later (another patchset) to add payload access across multiple buffers.
> This patchset should still allow for these future extensions. The goal
> is to lift the XDP MTU restriction that comes with XDP, but maintain
> same performance as before.
> 
> The main idea for the new multi-buffer layout is to reuse the same
> layout used for non-linear SKB. This rely on the "skb_shared_info"
> struct at the end of the first buffer to link together subsequent
> buffers. Keeping the layout compatible with SKBs is also done to ease
> and speedup creating an SKB from an xdp_{buff,frame}. Converting
> xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> code (patch 12/12).

A couple of questions I think we want answered in the cover letter. How I
read the above is that if mb is enabled, every received frame will have an
skb_shared_info field at the end of the first buffer.

First, just to be clear: a driver may have mb support, but the mb bit
should only be set per frame, so a frame with only a single buffer
will not have any extra cost even when the driver/network layer supports
mb. This way I can receive both multi-buffer and single-buffer frames
in the same stack without extra overhead on single-buffer frames. I
think we want to put the details here in the cover letter so we don't
have to read the mvneta driver to learn them. I'll admit we've
sort of flung features like this with minimal descriptions in the
past, but this is important, so let's get it described here.

Or put the details in the patch commits; those are pretty terse for
a new feature that has an impact on all xdp driver writers.
> 
> In order to provide to userspace some metdata about the non-linear
> xdp_{buff,frame}, we introduced 2 bpf helpers:
> - bpf_xdp_get_frag_count:
>   get the number of fragments for a given xdp multi-buffer.
> - bpf_xdp_get_frags_total_size:
>   get the total size of fragments for a given xdp multi-buffer.
> 
> Typical use cases for this series are:
> - Jumbo-frames
> - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0])
> - TSO
> 
> More info about the main idea behind this approach can be found here [1][2].
> 
> We carried out some throughput tests in order to verify we did not introduced
> any performance regression adding xdp multi-buff support to mvneta:
> 
> offered load is ~ 1000Kpps, packet size is 64B
> 
> commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
> - xdp-pass:     ~162Kpps
> - xdp-drop:     ~701Kpps
> - xdp-tx:       ~185Kpps
> - xdp-redirect: ~202Kpps
> 
> mvneta xdp multi-buff:
> - xdp-pass:     ~163Kpps
> - xdp-drop:     ~739Kpps
> - xdp-tx:       ~182Kpps
> - xdp-redirect: ~202Kpps

But these are fairly low rates?  Also why can't we push line rate
here on xdp-tx and xdp-redirect, 1gbps should be no problem unless
we have a very small core or something? Finally, can you explain
why the huge hit between xdp-drop and xdp-tx?

I'm a bit wary of touching the end of a buffer on 40/100Gbps nic
with DDIO and getting a cache miss. Do you have some argument why
this wouldn't be the case? Do we need someone to step up with a
10/40/100gbps nic and implement the feature as well so we can verify
this?

> 
> This series is based on "bpf: cpumap: remove rcpu pointer from cpu_map_build_skb signature"
> https://patchwork.ozlabs.org/project/netdev/patch/33cb9b7dc447de3ea6fd6ce713ac41bca8794423.1601292015.git.lorenzo@kernel.org/
> 
> Changes since v2:
> - add throughput measurements
> - drop bpf_xdp_adjust_mb_header bpf helper
> - introduce selftest for xdp multibuffer
> - addressed comments on bpf_xdp_get_frag_count
> - introduce xdp multi-buff support to cpumaps
> 
> Changes since v1:
> - Fix use-after-free in xdp_return_{buff/frame}
> - Introduce bpf helpers
> - Introduce xdp_mb sample program
> - access skb_shared_info->nr_frags only on the last fragment
> 
> Changes since RFC:
> - squash multi-buffer bit initialization in a single patch
> - add mvneta non-linear XDP buff support for tx side
> 
> [0] https://netdevconf.info/0x14/pub/slides/62/Implementing%20TCP%20RX%20zero%20copy.pdf
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> [2] https://netdevconf.info/0x14/pub/slides/10/add-xdp-on-driver.pdf (XDPmulti-buffers section)
> 
> Lorenzo Bianconi (10):
>   xdp: introduce mb in xdp_buff/xdp_frame
>   xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
>   net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
>   xdp: add multi-buff support to xdp_return_{buff/frame}
>   net: mvneta: add multi buffer support to XDP_TX
>   bpf: move user_size out of bpf_test_init
>   bpf: introduce multibuff support to bpf_prog_test_run_xdp()
>   bpf: add xdp multi-buffer selftest
>   net: mvneta: enable jumbo frames for XDP
>   bpf: cpumap: introduce xdp multi-buff support
> 
> Sameeh Jubran (2):
>   bpf: helpers: add multibuffer support
>   samples/bpf: add bpf program that uses xdp mb helpers
> 
>  drivers/net/ethernet/amazon/ena/ena_netdev.c  |   1 +
>  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |   1 +
>  .../net/ethernet/cavium/thunder/nicvf_main.c  |   1 +
>  .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  |   1 +
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c   |   1 +
>  drivers/net/ethernet/intel/ice/ice_txrx.c     |   1 +
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   1 +
>  .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |   1 +
>  drivers/net/ethernet/marvell/mvneta.c         | 131 +++++++------
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |   1 +
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c    |   1 +
>  .../net/ethernet/mellanox/mlx5/core/en_rx.c   |   1 +
>  .../ethernet/netronome/nfp/nfp_net_common.c   |   1 +
>  drivers/net/ethernet/qlogic/qede/qede_fp.c    |   1 +
>  drivers/net/ethernet/sfc/rx.c                 |   1 +
>  drivers/net/ethernet/socionext/netsec.c       |   1 +
>  drivers/net/ethernet/ti/cpsw.c                |   1 +
>  drivers/net/ethernet/ti/cpsw_new.c            |   1 +
>  drivers/net/hyperv/netvsc_bpf.c               |   1 +
>  drivers/net/tun.c                             |   2 +
>  drivers/net/veth.c                            |   1 +
>  drivers/net/virtio_net.c                      |   2 +
>  drivers/net/xen-netfront.c                    |   1 +
>  include/net/xdp.h                             |  31 ++-
>  include/uapi/linux/bpf.h                      |  14 ++
>  kernel/bpf/cpumap.c                           |  45 +----
>  net/bpf/test_run.c                            |  45 ++++-
>  net/core/dev.c                                |   1 +
>  net/core/filter.c                             |  42 ++++
>  net/core/xdp.c                                | 104 ++++++++++
>  samples/bpf/Makefile                          |   3 +
>  samples/bpf/xdp_mb_kern.c                     |  68 +++++++
>  samples/bpf/xdp_mb_user.c                     | 182 ++++++++++++++++++
>  tools/include/uapi/linux/bpf.h                |  14 ++
>  .../testing/selftests/bpf/prog_tests/xdp_mb.c |  77 ++++++++
>  .../selftests/bpf/progs/test_xdp_multi_buff.c |  24 +++
>  36 files changed, 691 insertions(+), 114 deletions(-)
>  create mode 100644 samples/bpf/xdp_mb_kern.c
>  create mode 100644 samples/bpf/xdp_mb_user.c
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
> 
> -- 
> 2.26.2
> 



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
  2020-09-30 16:39   ` Lorenzo Bianconi
@ 2020-09-30 21:40     ` Jakub Kicinski
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Kicinski @ 2020-09-30 21:40 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, bpf, davem, sameehj, john.fastabend, daniel, ast,
	shayagr, brouer, echaudro, lorenzo.bianconi, dsahern

On Wed, 30 Sep 2020 18:39:07 +0200 Lorenzo Bianconi wrote:
> > On Wed, 30 Sep 2020 17:41:51 +0200 Lorenzo Bianconi wrote:  
> > > This series introduce XDP multi-buffer support. The mvneta driver is
> > > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > > please focus on how these new types of xdp_{buff,frame} packets
> > > traverse the different layers and the layout design. It is on purpose
> > > that BPF-helpers are kept simple, as we don't want to expose the
> > > internal layout to allow later changes.  
> > 
> > This does not apply cleanly to net-next 🤔  
> 
> Hi Jakub,
> 
> patch 12/12 ("bpf: cpumap: introduce xdp multi-buff support") is based on commit
> efa90b50934c ("cpumap: Remove rcpu pointer from cpu_map_build_skb signature")
> already in bpf-next. I thought it was important to add patch 12/12 to the
> series. Do you have other conflicts?

Sorry for the delay, my thing (https://patchwork.hopto.org/) does not
collect what failed exactly, sadly :(

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 10/12] bpf: add xdp multi-buffer selftest
  2020-09-30 15:42 ` [PATCH v3 net-next 10/12] bpf: add xdp multi-buffer selftest Lorenzo Bianconi
@ 2020-10-01  7:43   ` Eelco Chaudron
  0 siblings, 0 replies; 24+ messages in thread
From: Eelco Chaudron @ 2020-10-01  7:43 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, bpf, davem, sameehj, kuba, john.fastabend, daniel, ast,
	shayagr, brouer, lorenzo.bianconi, dsahern



On 30 Sep 2020, at 17:42, Lorenzo Bianconi wrote:

> Introduce xdp multi-buffer selftest for the following ebpf helpers:
> - bpf_xdp_get_frags_total_size
> - bpf_xdp_get_frag_count
>
> Co-developed-by: Eelco Chaudron <echaudro@redhat.com>
> Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  .../testing/selftests/bpf/prog_tests/xdp_mb.c | 77 +++++++++++++++++++
>  .../selftests/bpf/progs/test_xdp_multi_buff.c | 24 ++++++
>  2 files changed, 101 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_mb.c b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c
> new file mode 100644
> index 000000000000..8cfe7253bf2a
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c
> @@ -0,0 +1,77 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <unistd.h>
> +#include <linux/kernel.h>
> +#include <test_progs.h>
> +#include <network_helpers.h>
> +
> +#include "test_xdp_multi_buff.skel.h"
> +
> +static void test_xdp_mb_check_len(void)
> +{
> +	int test_sizes[] = { 128, 4096, 9000 };
> +	struct test_xdp_multi_buff *pkt_skel;
> +	char *pkt_in = NULL, *pkt_out = NULL;
> +	__u32 duration = 0, retval, size;
> +	int err, pkt_fd, i;
> +
> +	/* Load XDP program */
> +	pkt_skel = test_xdp_multi_buff__open_and_load();
> +	if (CHECK(!pkt_skel, "pkt_skel_load", "test_xdp_mb skeleton failed\n"))
> +		goto out;
> +
> +	/* Allocate resources */
> +	pkt_out = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]);
> +	pkt_in = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]);
> +	if (CHECK(!pkt_in || !pkt_out, "malloc",
> +		  "Failed malloc, in = %p, out %p\n", pkt_in, pkt_out))
> +		goto out;
> +
> +	pkt_fd = bpf_program__fd(pkt_skel->progs._xdp_check_mb_len);
> +	if (pkt_fd < 0)
> +		goto out;
> +
> +	/* Run test for specific set of packets */
> +	for (i = 0; i < ARRAY_SIZE(test_sizes); i++) {
> +		int frag_count;
> +
> +		/* Run test program */
> +		err = bpf_prog_test_run(pkt_fd, 1, &pkt_in, test_sizes[i],

Small bug, should be:

         err = bpf_prog_test_run(pkt_fd, 1, pkt_in, test_sizes[i],

> +					pkt_out, &size, &retval, &duration);
> +
> +		if (CHECK(err || retval != XDP_PASS, // || size != test_sizes[i],
> +			  "test_run", "err %d errno %d retval %d size %d[%d]\n",
> +			  err, errno, retval, size, test_sizes[i]))
> +			goto out;
> +
> +		/* Verify test results */
> +		frag_count = DIV_ROUND_UP(
> +			test_sizes[i] - pkt_skel->data->test_result_xdp_len,
> +			getpagesize());
> +
> +		if (CHECK(pkt_skel->data->test_result_frag_count != frag_count,
> +			  "result", "frag_count = %llu != %u\n",
> +			  pkt_skel->data->test_result_frag_count, frag_count))
> +			goto out;
> +
> +		if (CHECK(pkt_skel->data->test_result_frag_len != test_sizes[i] -
> +			  pkt_skel->data->test_result_xdp_len,
> +			  "result", "frag_len = %llu != %llu\n",
> +			  pkt_skel->data->test_result_frag_len,
> +			  test_sizes[i] - pkt_skel->data->test_result_xdp_len))
> +			goto out;
> +	}
> +out:
> +	if (pkt_out)
> +		free(pkt_out);
> +	if (pkt_in)
> +		free(pkt_in);
> +
> +	test_xdp_multi_buff__destroy(pkt_skel);
> +}
> +
> +void test_xdp_mb(void)
> +{
> +	if (test__start_subtest("xdp_mb_check_len_frags"))
> +		test_xdp_mb_check_len();
> +}
> diff --git a/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
> new file mode 100644
> index 000000000000..1a46e0925282
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
> @@ -0,0 +1,24 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/bpf.h>
> +#include <linux/if_ether.h>
> +#include <bpf/bpf_helpers.h>
> +#include <stdint.h>
> +
> +__u64 test_result_frag_len = UINT64_MAX;
> +__u64 test_result_frag_count = UINT64_MAX;
> +__u64 test_result_xdp_len = UINT64_MAX;
> +
> +SEC("xdp_check_mb_len")
> +int _xdp_check_mb_len(struct xdp_md *xdp)
> +{
> +	void *data_end = (void *)(long)xdp->data_end;
> +	void *data = (void *)(long)xdp->data;
> +
> +	test_result_xdp_len = (__u64)(data_end - data);
> +	test_result_frag_len = bpf_xdp_get_frags_total_size(xdp);
> +	test_result_frag_count = bpf_xdp_get_frag_count(xdp);
> +	return XDP_PASS;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> -- 
> 2.26.2


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
  2020-09-30 19:47 ` John Fastabend
@ 2020-10-01  9:04   ` Lorenzo Bianconi
  0 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-10-01  9:04 UTC (permalink / raw)
  To: John Fastabend
  Cc: netdev, bpf, davem, sameehj, kuba, daniel, ast, shayagr, brouer,
	echaudro, lorenzo.bianconi, dsahern

[-- Attachment #1: Type: text/plain, Size: 9495 bytes --]

> Lorenzo Bianconi wrote:
> > This series introduce XDP multi-buffer support. The mvneta driver is
> > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > please focus on how these new types of xdp_{buff,frame} packets
> > traverse the different layers and the layout design. It is on purpose
> > that BPF-helpers are kept simple, as we don't want to expose the
> > internal layout to allow later changes.
> > 
> > For now, to keep the design simple and to maintain performance, the XDP
> > BPF-prog (still) only have access to the first-buffer. It is left for
> > later (another patchset) to add payload access across multiple buffers.
> > This patchset should still allow for these future extensions. The goal
> > is to lift the XDP MTU restriction that comes with XDP, but maintain
> > same performance as before.
> > 
> > The main idea for the new multi-buffer layout is to reuse the same
> > layout used for non-linear SKB. This rely on the "skb_shared_info"
> > struct at the end of the first buffer to link together subsequent
> > buffers. Keeping the layout compatible with SKBs is also done to ease
> > and speedup creating an SKB from an xdp_{buff,frame}. Converting
> > xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> > code (patch 12/12).
> 
> A couple of questions I think we want answered in the cover letter. How I
> read the above is that if mb is enabled, every received frame will have an
> skb_shared_info field at the end of the first buffer.

Setting the mb bit, the driver signals that the current xdp_frame is a
"non-linear" one and that the skb_shared_info is properly populated. As you
said below, the info is per-frame, so we can receive linear frames (mb = 0)
and non-linear ones (mb = 1). For a linear frame we do not need to access
the skb_shared_info, so we will not introduce any penalty.
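
For reference, the consumer-side rule boils down to the check used in the
skb build path of patch 12/12:

	/* skb_shared_info is only looked at when the frame is marked as
	 * multi-buffer, so mb = 0 frames keep the old linear path.
	 */
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
	int num_frags = xdpf->mb ? sinfo->nr_frags : 0;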

> 
> First, just to be clear: a driver may have mb support, but the mb bit
> should only be set per frame, so a frame with only a single buffer
> will not have any extra cost even when the driver/network layer supports
> mb. This way I can receive both multi-buffer and single-buffer frames
> in the same stack without extra overhead on single-buffer frames. I
> think we want to put the details here in the cover letter so we don't
> have to read the mvneta driver to learn them. I'll admit we've
> sort of flung features like this with minimal descriptions in the
> past, but this is important, so let's get it described here.

ack, I will add the info above in the cover letter. Thanks for pointing this out.

> 
> Or put the details in the patch commits; those are pretty terse for
> a new feature that has an impact on all xdp driver writers.
> > 
> > In order to provide to userspace some metdata about the non-linear
> > xdp_{buff,frame}, we introduced 2 bpf helpers:
> > - bpf_xdp_get_frag_count:
> >   get the number of fragments for a given xdp multi-buffer.
> > - bpf_xdp_get_frags_total_size:
> >   get the total size of fragments for a given xdp multi-buffer.
> > 
> > Typical use cases for this series are:
> > - Jumbo-frames
> > - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0])
> > - TSO
> > 
> > More info about the main idea behind this approach can be found here [1][2].
> > 
> > We carried out some throughput tests in order to verify we did not introduced
> > any performance regression adding xdp multi-buff support to mvneta:
> > 
> > offered load is ~ 1000Kpps, packet size is 64B
> > 
> > commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
> > - xdp-pass:     ~162Kpps
> > - xdp-drop:     ~701Kpps
> > - xdp-tx:       ~185Kpps
> > - xdp-redirect: ~202Kpps
> > 
> > mvneta xdp multi-buff:
> > - xdp-pass:     ~163Kpps
> > - xdp-drop:     ~739Kpps
> > - xdp-tx:       ~182Kpps
> > - xdp-redirect: ~202Kpps
> 
> But these are fairly low rates?  Also why can't we push line rate
> here on xdp-tx and xdp-redirect, 1gbps should be no problem unless
> we have a very small core or something? Finally, can you explain

I am using a Marvell EspressoBin to develop this feature.
The EspressoBin runs a Cortex-A53 and is not able to push line rate.
The tests above are meant to prove there is no penalty in introducing xdp
multi-buff for the linear case (I will point out clearly in the next
cover letter that the tests above refer to the linear case, mb = 0).

> why the huge hit between xdp-drop and xdp-tx?

not sure at the moment, the difference is not due to xdp multi-buff

> 
> I'm a bit wary of touching the end of a buffer on 40/100Gbps nic
> with DDIO and getting a cache miss. Do you have some argument why
> this wouldn't be the case? Do we need someone to step up with a
> 10/40/100gbps nic and implement the feature as well so we can verify
> this?

It would be interesting to have the implementation on a high-end device.
IIRC intel folks are working on it for AF_XDP.

Regards,
Lorenzo

> 
> > 
> > This series is based on "bpf: cpumap: remove rcpu pointer from cpu_map_build_skb signature"
> > https://patchwork.ozlabs.org/project/netdev/patch/33cb9b7dc447de3ea6fd6ce713ac41bca8794423.1601292015.git.lorenzo@kernel.org/
> > 
> > Changes since v2:
> > - add throughput measurements
> > - drop bpf_xdp_adjust_mb_header bpf helper
> > - introduce selftest for xdp multibuffer
> > - addressed comments on bpf_xdp_get_frag_count
> > - introduce xdp multi-buff support to cpumaps
> > 
> > Changes since v1:
> > - Fix use-after-free in xdp_return_{buff/frame}
> > - Introduce bpf helpers
> > - Introduce xdp_mb sample program
> > - access skb_shared_info->nr_frags only on the last fragment
> > 
> > Changes since RFC:
> > - squash multi-buffer bit initialization in a single patch
> > - add mvneta non-linear XDP buff support for tx side
> > 
> > [0] https://netdevconf.info/0x14/pub/slides/62/Implementing%20TCP%20RX%20zero%20copy.pdf
> > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> > [2] https://netdevconf.info/0x14/pub/slides/10/add-xdp-on-driver.pdf (XDPmulti-buffers section)
> > 
> > Lorenzo Bianconi (10):
> >   xdp: introduce mb in xdp_buff/xdp_frame
> >   xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
> >   net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
> >   xdp: add multi-buff support to xdp_return_{buff/frame}
> >   net: mvneta: add multi buffer support to XDP_TX
> >   bpf: move user_size out of bpf_test_init
> >   bpf: introduce multibuff support to bpf_prog_test_run_xdp()
> >   bpf: add xdp multi-buffer selftest
> >   net: mvneta: enable jumbo frames for XDP
> >   bpf: cpumap: introduce xdp multi-buff support
> > 
> > Sameeh Jubran (2):
> >   bpf: helpers: add multibuffer support
> >   samples/bpf: add bpf program that uses xdp mb helpers
> > 
> >  drivers/net/ethernet/amazon/ena/ena_netdev.c  |   1 +
> >  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |   1 +
> >  .../net/ethernet/cavium/thunder/nicvf_main.c  |   1 +
> >  .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  |   1 +
> >  drivers/net/ethernet/intel/i40e/i40e_txrx.c   |   1 +
> >  drivers/net/ethernet/intel/ice/ice_txrx.c     |   1 +
> >  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   1 +
> >  .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |   1 +
> >  drivers/net/ethernet/marvell/mvneta.c         | 131 +++++++------
> >  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |   1 +
> >  drivers/net/ethernet/mellanox/mlx4/en_rx.c    |   1 +
> >  .../net/ethernet/mellanox/mlx5/core/en_rx.c   |   1 +
> >  .../ethernet/netronome/nfp/nfp_net_common.c   |   1 +
> >  drivers/net/ethernet/qlogic/qede/qede_fp.c    |   1 +
> >  drivers/net/ethernet/sfc/rx.c                 |   1 +
> >  drivers/net/ethernet/socionext/netsec.c       |   1 +
> >  drivers/net/ethernet/ti/cpsw.c                |   1 +
> >  drivers/net/ethernet/ti/cpsw_new.c            |   1 +
> >  drivers/net/hyperv/netvsc_bpf.c               |   1 +
> >  drivers/net/tun.c                             |   2 +
> >  drivers/net/veth.c                            |   1 +
> >  drivers/net/virtio_net.c                      |   2 +
> >  drivers/net/xen-netfront.c                    |   1 +
> >  include/net/xdp.h                             |  31 ++-
> >  include/uapi/linux/bpf.h                      |  14 ++
> >  kernel/bpf/cpumap.c                           |  45 +----
> >  net/bpf/test_run.c                            |  45 ++++-
> >  net/core/dev.c                                |   1 +
> >  net/core/filter.c                             |  42 ++++
> >  net/core/xdp.c                                | 104 ++++++++++
> >  samples/bpf/Makefile                          |   3 +
> >  samples/bpf/xdp_mb_kern.c                     |  68 +++++++
> >  samples/bpf/xdp_mb_user.c                     | 182 ++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h                |  14 ++
> >  .../testing/selftests/bpf/prog_tests/xdp_mb.c |  77 ++++++++
> >  .../selftests/bpf/progs/test_xdp_multi_buff.c |  24 +++
> >  36 files changed, 691 insertions(+), 114 deletions(-)
> >  create mode 100644 samples/bpf/xdp_mb_kern.c
> >  create mode 100644 samples/bpf/xdp_mb_user.c
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
> > 
> > -- 
> > 2.26.2
> > 
> 
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
  2020-09-30 19:11   ` Alexei Starovoitov
@ 2020-10-01  9:47     ` Jesper Dangaard Brouer
  2020-10-01 15:05     ` Lorenzo Bianconi
  1 sibling, 0 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2020-10-01  9:47 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Lorenzo Bianconi, netdev, bpf, davem, sameehj, kuba,
	john.fastabend, daniel, ast, shayagr, echaudro, lorenzo.bianconi,
	dsahern, brouer

On Wed, 30 Sep 2020 12:11:21 -0700
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> On Wed, Sep 30, 2020 at 05:41:57PM +0200, Lorenzo Bianconi wrote:
> > From: Sameeh Jubran <sameehj@amazon.com>
> > 
> > The implementation is based on this [0] draft by Jesper D. Brouer.

First of all I think you are giving me too much credit, and this is
both not really relevant and also not specific enough.  The link[0]
contains several proposals (actually from different people) and it is
not clear which of these proposals you reference.

I think this patch needs to explain and argue why these BPF-helpers
make sense... this will become BPF UAPI.

> > Provided two new helpers:
> > 
> > * bpf_xdp_get_frag_count()
> > * bpf_xdp_get_frags_total_size()

Why was the "frag" and "frags" name chosen?

 
> > [0] xdp mb design - https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> > Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> > Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  include/uapi/linux/bpf.h       | 14 ++++++++++++
> >  net/core/filter.c              | 42 ++++++++++++++++++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h | 14 ++++++++++++
> >  3 files changed, 70 insertions(+)
> > 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index a22812561064..6f97dce8cccf 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3586,6 +3586,18 @@ union bpf_attr {
> >   * 		the data in *dst*. This is a wrapper of **copy_from_user**\ ().
> >   * 	Return
> >   * 		0 on success, or a negative error in case of failure.
> > + *
> > + * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
> > + *	Description
> > + *		Get the number of fragments for a given xdp multi-buffer.
> > + *	Return
> > + *		The number of fragments
> > + *
> > + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
> > + *	Description
> > + *		Get the total size of fragments for a given xdp multi-buffer.
> > + *	Return
> > + *		The total size of fragments for a given xdp multi-buffer.
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)		\
> >  	FN(unspec),			\
> > @@ -3737,6 +3749,8 @@ union bpf_attr {
> >  	FN(inode_storage_delete),	\
> >  	FN(d_path),			\
> >  	FN(copy_from_user),		\
> > +	FN(xdp_get_frag_count),		\
> > +	FN(xdp_get_frags_total_size),	\
> >  	/* */  
> 
> Please route the set via bpf-next otherwise merge conflicts will be severe.
> 



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
  2020-09-30 19:11   ` Alexei Starovoitov
  2020-10-01  9:47     ` Jesper Dangaard Brouer
@ 2020-10-01 15:05     ` Lorenzo Bianconi
  2020-10-01 15:40       ` Alexei Starovoitov
  1 sibling, 1 reply; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-10-01 15:05 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: netdev, bpf, davem, sameehj, kuba, john.fastabend, daniel, ast,
	shayagr, brouer, echaudro, lorenzo.bianconi, dsahern

[-- Attachment #1: Type: text/plain, Size: 1657 bytes --]

> On Wed, Sep 30, 2020 at 05:41:57PM +0200, Lorenzo Bianconi wrote:

Hi Alexei,

> > From: Sameeh Jubran <sameehj@amazon.com>
> > 
> > The implementation is based on this [0] draft by Jesper D. Brouer.
> > 
> > Provided two new helpers:
> > 
> > * bpf_xdp_get_frag_count()
> > * bpf_xdp_get_frags_total_size()
> > 
> > + * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
> > + *	Description
> > + *		Get the number of fragments for a given xdp multi-buffer.
> > + *	Return
> > + *		The number of fragments
> > + *
> > + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
> > + *	Description
> > + *		Get the total size of fragments for a given xdp multi-buffer.
> > + *	Return
> > + *		The total size of fragments for a given xdp multi-buffer.
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)		\
> >  	FN(unspec),			\
> > @@ -3737,6 +3749,8 @@ union bpf_attr {
> >  	FN(inode_storage_delete),	\
> >  	FN(d_path),			\
> >  	FN(copy_from_user),		\
> > +	FN(xdp_get_frag_count),		\
> > +	FN(xdp_get_frags_total_size),	\
> >  	/* */
> 
> Please route the set via bpf-next otherwise merge conflicts will be severe.

ack, fine

in bpf-next the following two commits (available in net-next) are currently missing:
- 632bb64f126a: net: mvneta: try to use in-irq pp cache in mvneta_txq_bufs_free
- 879456bedbe5: net: mvneta: avoid possible cache misses in mvneta_rx_swbm

is it ok to rebase bpf-next on top of net-next in order to post the whole series
in bpf-next? Or do you prefer to post the mvneta patches in net-next and the bpf
related changes in bpf-next once it is rebased on top of net-next?

Regards,
Lorenzo

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
  2020-10-01 15:05     ` Lorenzo Bianconi
@ 2020-10-01 15:40       ` Alexei Starovoitov
  2020-10-01 15:44         ` Lorenzo Bianconi
  0 siblings, 1 reply; 24+ messages in thread
From: Alexei Starovoitov @ 2020-10-01 15:40 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: Network Development, bpf, David S. Miller, sameehj,
	Jakub Kicinski, John Fastabend, Daniel Borkmann,
	Alexei Starovoitov, shayagr, Jesper Dangaard Brouer,
	Eelco Chaudron, Lorenzo Bianconi, David Ahern

On Thu, Oct 1, 2020 at 8:05 AM Lorenzo Bianconi <lorenzo@kernel.org> wrote:
>
> > On Wed, Sep 30, 2020 at 05:41:57PM +0200, Lorenzo Bianconi wrote:
>
> Hi Alexei,
>
> > > From: Sameeh Jubran <sameehj@amazon.com>
> > >
> > > The implementation is based on this [0] draft by Jesper D. Brouer.
> > >
> > > Provided two new helpers:
> > >
> > > * bpf_xdp_get_frag_count()
> > > * bpf_xdp_get_frags_total_size()
> > >
> > > + * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
> > > + * Description
> > > + *         Get the number of fragments for a given xdp multi-buffer.
> > > + * Return
> > > + *         The number of fragments
> > > + *
> > > + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
> > > + * Description
> > > + *         Get the total size of fragments for a given xdp multi-buffer.
> > > + * Return
> > > + *         The total size of fragments for a given xdp multi-buffer.
> > >   */
> > >  #define __BPF_FUNC_MAPPER(FN)              \
> > >     FN(unspec),                     \
> > > @@ -3737,6 +3749,8 @@ union bpf_attr {
> > >     FN(inode_storage_delete),       \
> > >     FN(d_path),                     \
> > >     FN(copy_from_user),             \
> > > +   FN(xdp_get_frag_count),         \
> > > +   FN(xdp_get_frags_total_size),   \
> > >     /* */
> >
> > Please route the set via bpf-next otherwise merge conflicts will be severe.
>
> ack, fine
>
> in bpf-next the following two commits (available in net-next) are currently missing:
> - 632bb64f126a: net: mvneta: try to use in-irq pp cache in mvneta_txq_bufs_free
> - 879456bedbe5: net: mvneta: avoid possible cache misses in mvneta_rx_swbm
>
> is it ok to rebase bpf-next on top of net-next in order to post the whole series
> in bpf-next? Or do you prefer to post the mvneta patches in net-next and the bpf
> related changes in bpf-next once it is rebased on top of net-next?

bpf-next will receive these patches later today,
so I prefer the whole thing on top of bpf-next at that time.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support
  2020-10-01 15:40       ` Alexei Starovoitov
@ 2020-10-01 15:44         ` Lorenzo Bianconi
  0 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Bianconi @ 2020-10-01 15:44 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Network Development, bpf, David S. Miller, sameehj,
	Jakub Kicinski, John Fastabend, Daniel Borkmann,
	Alexei Starovoitov, shayagr, Jesper Dangaard Brouer,
	Eelco Chaudron, Lorenzo Bianconi, David Ahern

[-- Attachment #1: Type: text/plain, Size: 778 bytes --]

[...]
> > >
> > > Please route the set via bpf-next otherwise merge conflicts will be severe.
> >
> > ack, fine
> >
> > in bpf-next the following two commits (available in net-next) are currently missing:
> > - 632bb64f126a: net: mvneta: try to use in-irq pp cache in mvneta_txq_bufs_free
> > - 879456bedbe5: net: mvneta: avoid possible cache misses in mvneta_rx_swbm
> >
> > is it ok to rebase bpf-next on top of net-next in order to post the whole series
> > in bpf-next? Or do you prefer to post the mvneta patches in net-next and the bpf
> > related changes in bpf-next once it is rebased on top of net-next?
> 
> bpf-next will receive these patches later today,
> so I prefer the whole thing on top of bpf-next at that time.

sounds good, thx.

Regards,
Lorenzo

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2020-10-01 15:44 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-30 15:41 [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 01/12] xdp: introduce mb in xdp_buff/xdp_frame Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 02/12] xdp: initialize xdp_buff mb bit to 0 in all XDP drivers Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 03/12] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 04/12] xdp: add multi-buff support to xdp_return_{buff/frame} Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 05/12] net: mvneta: add multi buffer support to XDP_TX Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 06/12] bpf: helpers: add multibuffer support Lorenzo Bianconi
2020-09-30 19:11   ` Alexei Starovoitov
2020-10-01  9:47     ` Jesper Dangaard Brouer
2020-10-01 15:05     ` Lorenzo Bianconi
2020-10-01 15:40       ` Alexei Starovoitov
2020-10-01 15:44         ` Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 07/12] samples/bpf: add bpf program that uses xdp mb helpers Lorenzo Bianconi
2020-09-30 15:41 ` [PATCH v3 net-next 08/12] bpf: move user_size out of bpf_test_init Lorenzo Bianconi
2020-09-30 15:42 ` [PATCH v3 net-next 09/12] bpf: introduce multibuff support to bpf_prog_test_run_xdp() Lorenzo Bianconi
2020-09-30 15:42 ` [PATCH v3 net-next 10/12] bpf: add xdp multi-buffer selftest Lorenzo Bianconi
2020-10-01  7:43   ` Eelco Chaudron
2020-09-30 15:42 ` [PATCH v3 net-next 11/12] net: mvneta: enable jumbo frames for XDP Lorenzo Bianconi
2020-09-30 15:42 ` [PATCH v3 net-next 12/12] bpf: cpumap: introduce xdp multi-buff support Lorenzo Bianconi
2020-09-30 16:31 ` [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support Jakub Kicinski
2020-09-30 16:39   ` Lorenzo Bianconi
2020-09-30 21:40     ` Jakub Kicinski
2020-09-30 19:47 ` John Fastabend
2020-10-01  9:04   ` Lorenzo Bianconi
