From: Lorenzo Bianconi <lorenzo@kernel.org>
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, sameehj@amazon.com,
kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net,
ast@kernel.org, shayagr@amazon.com, brouer@redhat.com,
echaudro@redhat.com, lorenzo.bianconi@redhat.com,
dsahern@kernel.org
Subject: [PATCH v3 net-next 00/12] mvneta: introduce XDP multi-buffer support
Date: Wed, 30 Sep 2020 17:41:51 +0200 [thread overview]
Message-ID: <cover.1601478613.git.lorenzo@kernel.org> (raw)
This series introduces XDP multi-buffer support. The mvneta driver is
the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
please focus on how these new types of xdp_{buff,frame} packets
traverse the different layers, and on the layout design. The BPF helpers
are deliberately kept simple, since we don't want to expose the internal
layout and thereby preclude later changes.
For now, to keep the design simple and to maintain performance, the XDP
BPF program (still) only has access to the first buffer. Payload access
across multiple buffers is left for a later patchset; this patchset
should still allow for such future extensions. The goal is to lift the
MTU restriction that comes with XDP while maintaining the same
performance as before.
The main idea behind the new multi-buffer layout is to reuse the same
layout used for non-linear SKBs. It relies on the "skb_shared_info"
struct at the end of the first buffer to link together subsequent
buffers. Keeping the layout compatible with SKBs also eases and speeds
up creating an SKB from an xdp_{buff,frame}. Converting an xdp_frame to
an SKB and delivering it to the network stack is shown in the cpumap
code (patch 12/12).
A multi-buffer bit (mb) has been introduced in the xdp_{buff,frame}
structures to notify the bpf/network layer that this is an xdp
multi-buffer frame.
In order to provide userspace with some metadata about the non-linear
xdp_{buff,frame}, we introduced 2 bpf helpers:
- bpf_xdp_get_frag_count:
get the number of fragments for a given xdp multi-buffer.
- bpf_xdp_get_frags_total_size:
get the total size of fragments for a given xdp multi-buffer.
Typical use cases for this series are:
- Jumbo-frames
- Packet header split (please see Google’s use-case @ NetDevConf 0x14, [0])
- TSO
More info about the main idea behind this approach can be found here [1][2].
We carried out some throughput tests in order to verify we did not
introduce any performance regression when adding xdp multi-buff support
to mvneta:
offered load is ~1000 Kpps, packet size is 64B
commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
- xdp-pass: ~162Kpps
- xdp-drop: ~701Kpps
- xdp-tx: ~185Kpps
- xdp-redirect: ~202Kpps
mvneta xdp multi-buff:
- xdp-pass: ~163Kpps
- xdp-drop: ~739Kpps
- xdp-tx: ~182Kpps
- xdp-redirect: ~202Kpps
This series is based on "bpf: cpumap: remove rcpu pointer from cpu_map_build_skb signature"
https://patchwork.ozlabs.org/project/netdev/patch/33cb9b7dc447de3ea6fd6ce713ac41bca8794423.1601292015.git.lorenzo@kernel.org/
Changes since v2:
- add throughput measurements
- drop bpf_xdp_adjust_mb_header bpf helper
- introduce selftest for xdp multibuffer
- addressed comments on bpf_xdp_get_frag_count
- introduce xdp multi-buff support to cpumaps
Changes since v1:
- Fix use-after-free in xdp_return_{buff/frame}
- Introduce bpf helpers
- Introduce xdp_mb sample program
- access skb_shared_info->nr_frags only on the last fragment
Changes since RFC:
- squash multi-buffer bit initialization in a single patch
- add mvneta non-linear XDP buff support for tx side
[0] https://netdevconf.info/0x14/pub/slides/62/Implementing%20TCP%20RX%20zero%20copy.pdf
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://netdevconf.info/0x14/pub/slides/10/add-xdp-on-driver.pdf (XDPmulti-buffers section)
Lorenzo Bianconi (10):
xdp: introduce mb in xdp_buff/xdp_frame
xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
xdp: add multi-buff support to xdp_return_{buff/frame}
net: mvneta: add multi buffer support to XDP_TX
bpf: move user_size out of bpf_test_init
bpf: introduce multibuff support to bpf_prog_test_run_xdp()
bpf: add xdp multi-buffer selftest
net: mvneta: enable jumbo frames for XDP
bpf: cpumap: introduce xdp multi-buff support
Sameeh Jubran (2):
bpf: helpers: add multibuffer support
samples/bpf: add bpf program that uses xdp mb helpers
drivers/net/ethernet/amazon/ena/ena_netdev.c | 1 +
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 1 +
.../net/ethernet/cavium/thunder/nicvf_main.c | 1 +
.../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 1 +
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 1 +
drivers/net/ethernet/intel/ice/ice_txrx.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 1 +
.../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 1 +
drivers/net/ethernet/marvell/mvneta.c | 131 +++++++------
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 1 +
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 1 +
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 1 +
.../ethernet/netronome/nfp/nfp_net_common.c | 1 +
drivers/net/ethernet/qlogic/qede/qede_fp.c | 1 +
drivers/net/ethernet/sfc/rx.c | 1 +
drivers/net/ethernet/socionext/netsec.c | 1 +
drivers/net/ethernet/ti/cpsw.c | 1 +
drivers/net/ethernet/ti/cpsw_new.c | 1 +
drivers/net/hyperv/netvsc_bpf.c | 1 +
drivers/net/tun.c | 2 +
drivers/net/veth.c | 1 +
drivers/net/virtio_net.c | 2 +
drivers/net/xen-netfront.c | 1 +
include/net/xdp.h | 31 ++-
include/uapi/linux/bpf.h | 14 ++
kernel/bpf/cpumap.c | 45 +----
net/bpf/test_run.c | 45 ++++-
net/core/dev.c | 1 +
net/core/filter.c | 42 ++++
net/core/xdp.c | 104 ++++++++++
samples/bpf/Makefile | 3 +
samples/bpf/xdp_mb_kern.c | 68 +++++++
samples/bpf/xdp_mb_user.c | 182 ++++++++++++++++++
tools/include/uapi/linux/bpf.h | 14 ++
.../testing/selftests/bpf/prog_tests/xdp_mb.c | 77 ++++++++
.../selftests/bpf/progs/test_xdp_multi_buff.c | 24 +++
36 files changed, 691 insertions(+), 114 deletions(-)
create mode 100644 samples/bpf/xdp_mb_kern.c
create mode 100644 samples/bpf/xdp_mb_user.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c
--
2.26.2