From: John Fastabend <john.fastabend@gmail.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>,
	bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net,
	kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
	shayagr@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org,
	brouer@redhat.com, echaudro@redhat.com, jasowang@redhat.com,
	alexander.duyck@gmail.com, saeed@kernel.org,
	maciej.fijalkowski@intel.com, magnus.karlsson@intel.com,
	tirthendu.sarkar@intel.com, toke@redhat.com
Subject: RE: [PATCH v20 bpf-next 00/23] mvneta: introduce XDP multi-buffer support
Date: Fri, 10 Dec 2021 16:16:51 -0800	[thread overview]
Message-ID: <61b3edf34d399_2c40320815@john.notmuch> (raw)
In-Reply-To: <cover.1639162845.git.lorenzo@kernel.org>

Lorenzo Bianconi wrote:
> This series introduces XDP multi-buffer support. The mvneta driver is
> the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
> please focus on how these new types of xdp_{buff,frame} packets
> traverse the different layers and on the layout design. The BPF helpers
> are deliberately kept simple, since we don't want to expose the
> internal layout, so that it can still be changed later.
> 
> The main idea for the new multi-buffer layout is to reuse the same
> structure used for non-linear SKBs. This relies on the "skb_shared_info"
> struct at the end of the first buffer to link together subsequent
> buffers. Keeping the layout compatible with SKBs also eases and speeds
> up creating an SKB from an xdp_{buff,frame}.
> Converting an xdp_frame to an SKB and delivering it to the network stack
> is shown in patch 05 (e.g. cpumaps).
> 
> A multi-buffer bit (mb) has been introduced in the flags field of xdp_{buff,frame}
> structure to notify the bpf/network layer if this is a xdp multi-buffer frame
> (mb = 1) or not (mb = 0).
> The mb bit will be set by a xdp multi-buffer capable driver only for
> non-linear frames maintaining the capability to receive linear frames
> without any extra cost since the skb_shared_info structure at the end
> of the first buffer will be initialized only if mb is set.
> Moreover the flags field in xdp_{buff,frame} will be reused even for
> xdp rx csum offloading in future series.
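[Editorial sketch of that contract on the driver Rx path. The helper name
for setting the mb bit is an assumption based on patch 02/23, not copied
from the series.]

        /* Driver Rx sketch: the mb bit is set only for non-linear frames, so
         * the skb_shared_info area is never touched for the common linear case.
         */
        static void rx_finalize_xdp_buff(struct xdp_buff *xdp, unsigned int nr_frags)
        {
                if (!nr_frags)
                        return;         /* linear frame: no extra cost, mb stays 0 */

                /* shared_info at the end of the first buffer, initialized only now */
                xdp_get_shared_info_from_buff(xdp)->nr_frags = nr_frags;
                xdp_buff_set_mb(xdp);   /* assumed helper name from patch 02/23 */
        }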
> 
> Typical use cases for this series are:
> - Jumbo-frames
> - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0])
> - TSO/GRO for XDP_REDIRECT
> 
> The following three eBPF helpers (and related selftests) have been
> introduced (a short usage sketch follows the list):
> - bpf_xdp_load_bytes:
>   This helper is provided as an easy way to load data from an xdp buffer.
>   It can be used to load len bytes at a given offset from the frame
>   associated with xdp_md into the buffer pointed to by buf.
> - bpf_xdp_store_bytes:
>   Store len bytes from the buffer buf into the frame associated with
>   xdp_md at the given offset.
> - bpf_xdp_get_buff_len:
>   Return the total frame size (linear + paged parts).
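[Editorial usage sketch, untested: a minimal program skeleton exercising
the three helpers, assuming headers generated from a kernel with this
series applied. The SEC name is the one introduced later in this cover
letter; the parsing logic is made up for illustration.]

        // SPDX-License-Identifier: GPL-2.0
        /* Sketch only: read, mangle and write back bytes that may span
         * into fragments of a multi-buffer frame.
         */
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        SEC("xdp_mb/")
        int xdp_mb_example(struct xdp_md *ctx)
        {
                __u8 eth[14];

                /* Total frame size, linear + paged parts. */
                __u64 len = bpf_xdp_get_buff_len(ctx);

                if (len < sizeof(eth))
                        return XDP_DROP;

                /* Works even if the requested bytes cross into a fragment. */
                if (bpf_xdp_load_bytes(ctx, 0, eth, sizeof(eth)))
                        return XDP_DROP;

                /* Example mangling: clear the destination MAC, write it back. */
                __builtin_memset(eth, 0, 6);
                if (bpf_xdp_store_bytes(ctx, 0, eth, sizeof(eth)))
                        return XDP_DROP;

                return XDP_PASS;
        }

        char _license[] SEC("license") = "GPL";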
> 
> bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to take
> xdp multi-buffer frames into account.
> Moreover, similar to skb_header_pointer, we introduced the bpf_xdp_pointer
> utility routine: it returns a pointer to a given position in the xdp_buff
> if the requested area (offset + len) is contained in a contiguous memory
> area; otherwise the data must be copied into a bounce buffer provided by
> the caller via bpf_xdp_copy_buf().
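[Editorial, kernel-internal calling-pattern sketch only: signatures and
return conventions are paraphrased from the paragraph above, not copied
from the patches.]

        /* skb_header_pointer()-style pattern: use the direct pointer when the
         * area is contiguous, otherwise fall back to a caller-provided bounce
         * buffer filled by bpf_xdp_copy_buf().
         */
        static int parse_at(struct xdp_buff *xdp, u32 offset, void *scratch, u32 len)
        {
                void *ptr = bpf_xdp_pointer(xdp, offset, len);

                if (!ptr) {
                        /* Requested area spans buffers: copy it out first.
                         * (Last argument assumed to select the copy direction.)
                         */
                        bpf_xdp_copy_buf(xdp, offset, scratch, len, false);
                        ptr = scratch;
                }

                /* ... parse 'len' bytes at 'ptr' ... */
                return 0;
        }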
> 
> A BPF_F_XDP_MB flag for bpf_attr has been introduced to notify the kernel
> that the eBPF program fully supports xdp multi-buffer.
> SEC("xdp_mb/"), SEC_DEF("xdp_devmap_mb/") and SEC_DEF("xdp_cpumap_mb/")
> have been introduced to declare xdp multi-buffer support.
> The NIC driver is expected to reject an eBPF program if it is running in
> XDP multi-buffer mode and the program does not support XDP multi-buffer.
> In the same way, it is not possible to mix xdp multi-buffer and xdp legacy
> programs in a CPUMAP/DEVMAP, or to tail-call an xdp multi-buffer/legacy
> program from a legacy/multi-buffer one.
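[Editorial declaration sketch for the map-attached variants, using the
section names listed above. That loading from these sections makes libbpf
set BPF_F_XDP_MB is taken from the cover letter and patch 18; treat the
details as illustrative.]

        // SPDX-License-Identifier: GPL-2.0
        /* Sketch: programs meant to run from a CPUMAP/DEVMAP entry declare
         * multi-buffer support through the dedicated section names, so legacy
         * and multi-buffer programs cannot be mixed in the same map.
         */
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        SEC("xdp_cpumap_mb/")
        int xdp_cpumap_mb_prog(struct xdp_md *ctx)
        {
                return XDP_PASS;
        }

        SEC("xdp_devmap_mb/")
        int xdp_devmap_mb_prog(struct xdp_md *ctx)
        {
                return XDP_PASS;
        }

        char _license[] SEC("license") = "GPL";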
> 
> More info about the main idea behind this approach can be found here [1][2].

Thanks for sticking with this.

OK for the series, I really want to see this on some other hardware though,
preferably 40Gbps or more ASAP...

Acked-by: John Fastabend <john.fastabend@gmail.com>

Thread overview: 31+ messages
2021-12-10 19:14 [PATCH v20 bpf-next 00/23] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 01/23] net: skbuff: add size metadata to skb_shared_info for xdp Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 02/23] xdp: introduce flags field in xdp_buff/xdp_frame Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 03/23] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer Lorenzo Bianconi
2021-12-11  0:09   ` John Fastabend
2021-12-10 19:14 ` [PATCH v20 bpf-next 04/23] net: mvneta: simplify mvneta_swbm_add_rx_fragment management Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 05/23] net: xdp: add xdp_update_skb_shared_info utility routine Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 06/23] net: marvell: rely on " Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 07/23] xdp: add multi-buff support to xdp_return_{buff/frame} Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 08/23] net: mvneta: add multi buffer support to XDP_TX Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 09/23] bpf: introduce BPF_F_XDP_MB flag in prog_flags loading the ebpf program Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 10/23] net: mvneta: enable jumbo frames if the loaded XDP program support mb Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 11/23] bpf: introduce bpf_xdp_get_buff_len helper Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 12/23] bpf: add multi-buff support to the bpf_xdp_adjust_tail() API Lorenzo Bianconi
2021-12-11  0:11   ` John Fastabend
2021-12-10 19:14 ` [PATCH v20 bpf-next 13/23] bpf: add multi-buffer support to xdp copy helpers Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 14/23] bpf: move user_size out of bpf_test_init Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 15/23] bpf: introduce multibuff support to bpf_prog_test_run_xdp() Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 16/23] bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 17/23] bpf: selftests: update xdp_adjust_tail selftest to include multi-buffer Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 18/23] libbpf: Add SEC name for xdp_mb programs Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 19/23] bpf: generalise tail call map compatibility check Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 20/23] net: xdp: introduce bpf_xdp_pointer utility routine Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 21/23] bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 22/23] bpf: selftests: add CPUMAP/DEVMAP selftests for xdp multi-buff Lorenzo Bianconi
2021-12-10 19:14 ` [PATCH v20 bpf-next 23/23] xdp: disable XDP_REDIRECT " Lorenzo Bianconi
2021-12-11  6:49   ` Jesper Dangaard Brouer
2021-12-11  0:16 ` John Fastabend [this message]
2021-12-11 17:38 ` [PATCH v20 bpf-next 00/23] mvneta: introduce XDP multi-buffer support Toke Høiland-Jørgensen
2021-12-28 14:45 ` Lorenzo Bianconi
2021-12-30  2:07   ` Alexei Starovoitov
