From: Magnus Karlsson <magnus.karlsson@gmail.com>
To: John Fastabend <john.fastabend@gmail.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>, bpf <bpf@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo.bianconi@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	shayagr@amazon.com, sameehj@amazon.com,
	David Ahern <dsahern@kernel.org>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Eelco Chaudron <echaudro@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Saeed Mahameed <saeed@kernel.org>,
	"Fijalkowski, Maciej" <maciej.fijalkowski@intel.com>,
	"Karlsson, Magnus" <magnus.karlsson@intel.com>,
	Tirthendu <tirthendu.sarkar@intel.com>
Subject: Re: [PATCH v9 bpf-next 00/14] mvneta: introduce XDP multi-buffer support
Date: Thu, 1 Jul 2021 09:56:40 +0200	[thread overview]
Message-ID: <CAJ8uoz3pOrMM-krx_f=n_f5LrhiXy8pHLb78shENuvSRxN68og@mail.gmail.com> (raw)
In-Reply-To: <60d26fcdbd5c7_1342e208f6@john-XPS-13-9370.notmuch>

On Wed, Jun 23, 2021 at 1:19 AM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Lorenzo Bianconi wrote:
> > This series introduces XDP multi-buffer support. The mvneta driver is
> > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
> > please focus on how these new types of xdp_{buff,frame} packets
> > traverse the different layers, and on the layout design. The BPF helpers
> > are deliberately kept simple, so that the internal layout is not
> > exposed and can still be changed later.
> >
> > For now, to keep the design simple and to maintain performance, the XDP
> > BPF program (still) only has access to the first buffer. Adding payload
> > access across multiple buffers is left for later (another patchset).
> > This patchset should still allow for these future extensions. The goal
> > is to lift the MTU restriction that comes with XDP, while maintaining
> > the same performance as before.
>
> At this point I don't think we can have a partial implementation. At
> the moment we have packet capture applications and protocol parsers
> running in production. If we allow this to be merged in stages, we are
> going to break those applications that make the fundamental assumption
> that they have access to all the data in the packet.
>
> There will be no way to fix it when it happens. The teams running the
> applications won't necessarily be able to change the network MTU. Today
> it simply doesn't work: hard stop. That is better than something that
> sort of works some of the time. And if we end up in a situation where
> some drivers support partial access and others support full access,
> the support matrix gets even worse.
>
> I think we need full support and access to all bytes. I believe
> I said this earlier, but we have now deployed apps that really do need
> access to the payloads, so it's not a theoretical concern anymore, but
> a real one based on deployed BPF programs.
>
> >
> > The main idea for the new multi-buffer layout is to reuse the same
> > layout used for non-linear SKBs. This relies on the "skb_shared_info"
> > struct at the end of the first buffer to link together subsequent
> > buffers. Keeping the layout compatible with SKBs is also done to ease
> > and speed up creating an SKB from an xdp_{buff,frame}.
> > Converting an xdp_frame to an SKB and delivering it to the network
> > stack is shown in patch 07/14 (e.g. for cpumaps).
> >
> > A multi-buffer bit (mb) has been introduced in the flags field of the
> > xdp_{buff,frame} structures to tell the BPF/network layer whether this
> > is an xdp multi-buffer frame (mb = 1) or not (mb = 0).
> > The mb bit will be set by an xdp multi-buffer capable driver only for
> > non-linear frames, so linear frames can still be received without any
> > extra cost, since the skb_shared_info structure at the end of the
> > first buffer will be initialized only if mb is set.
> > Moreover, the flags field in xdp_{buff,frame} will be reused for
> > xdp rx csum offloading in a future series.
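
To make the layout concrete, here is a minimal sketch of how a
consumer could walk such a frame. All function names here are
illustrative placeholders, not the series' actual API; the point is
the mb check followed by the skb_shared_info walk:

/* Minimal sketch, assuming the layout described above: an mb flag in
 * the xdp_buff flags field and an skb_shared_info struct at the end
 * of the first buffer's data area.
 */
static void walk_xdp_frags(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;
	int i;

	if (!xdp_buff_is_mb(xdp))	/* mb == 0: linear, no sinfo */
		return;

	sinfo = xdp_get_shared_info_from_buff(xdp);

	for (i = 0; i < sinfo->nr_frags; i++) {
		skb_frag_t *frag = &sinfo->frags[i];

		/* each fragment is a page + offset + length */
		process_bytes(skb_frag_address(frag), skb_frag_size(frag));
	}
}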
> >
> > Typical use cases for this series are:
> > - Jumbo-frames
> > - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0])
> > - TSO
> >
> > A new bpf helper (bpf_xdp_get_buff_len) has been introduced to report
> > the total frame size (linear + paged parts) to the eBPF layer.
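
As a sketch of how an XDP program might use it (assuming the helper
takes the XDP context and returns the total linear + paged length, as
described above):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int len_check(struct xdp_md *ctx)
{
	__u64 full_len = bpf_xdp_get_buff_len(ctx);	/* linear + paged */
	__u64 linear_len = ctx->data_end - ctx->data;	/* first buffer */

	/* on a multi-buffer frame, full_len exceeds linear_len */
	if (full_len > linear_len)
		bpf_printk("multi-buff frame, %llu bytes total", full_len);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";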
>
> Is it possible to make currently working programs continue to work?
> For a simple packet capture example, a program might capture the
> entire packet of bytes '(data_end - data_start)'. With the above
> implementation the program will continue to run, but will no longer
> be capturing all the bytes... so it's a silent failure. Otherwise
> I'll need to backport fixes into my BPF programs and releases to
> ensure they don't walk onto a new kernel with multi-buffer support
> enabled. It's not ideal.
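
Concretely, the failure mode is a capture program along these lines
(a hypothetical sketch using the classic perf-buffer capture pattern;
map and program names are made up):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(__u32));
} events SEC(".maps");

SEC("xdp")
int capture(struct xdp_md *ctx)
{
	__u64 len = ctx->data_end - ctx->data;
	__u32 meta = len;

	/* Pre-multi-buff assumption: [data, data_end) is the whole
	 * packet. On a multi-buffer kernel it is only the first
	 * buffer, so this capture silently truncates jumbo frames.
	 */
	bpf_perf_event_output(ctx, &events,
			      (len << 32) | BPF_F_CURRENT_CPU,
			      &meta, sizeof(meta));
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";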
>
> >
> > The bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to
> > take xdp multi-buff frames into account.
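
For example, a program that trims frames to a fixed snap length (a
hypothetical sketch; the point is that a negative delta can now
release paged data as well, not just linear tailroom):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define SNAP_LEN 128	/* arbitrary example value */

SEC("xdp")
int trim(struct xdp_md *ctx)
{
	__u64 len = bpf_xdp_get_buff_len(ctx);

	/* shrink the frame to SNAP_LEN bytes */
	if (len > SNAP_LEN &&
	    bpf_xdp_adjust_tail(ctx, SNAP_LEN - (int)len) < 0)
		return XDP_ABORTED;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";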
> >
> > More info about the main idea behind this approach can be found here [1][2].
>
> Will read [1],[2].
>
> Where did the perf data for the 40gbps NIC go? I think we want that
> done again on this series with at least 40gbps NICs, and better
> yet 100gbps drivers. If it's addressed in a patch commit message
> I'll find it; I'm reading the series now.

Here is the perf data for a 40 Gbps i40e on my 2.1 GHz Cascade Lake server.

                    xdpsock -r          XDP_DROP    XDP_TX
Lorenzo's patches:  -2% / +1.5 cycles   -3% / +3    +2% / -6 (yes, it gets better!)
+ i40e support:     -5.5% / +5          -8% / +9    -9% / +31

It seems that it is the driver support itself that hurts now. The
overhead of the base support has decreased substantially over time,
which is good.

> >
> > Changes since v8:
> > - add proper dma unmapping if XDP_TX fails on mvneta for an xdp multi-buff
> > - switch back to skb_shared_info implementation from previous xdp_shared_info
> >   one
> > - avoid using a bitfield in xdp_buff/xdp_frame since it introduces performance
> >   regressions. Tested now on a 10G NIC (ixgbe) to verify there are no
> >   performance penalties for the regular codebase
> > - add bpf_xdp_get_buff_len helper and remove frame_length field in xdp ctx
> > - add data_len field in skb_shared_info struct
> >
> > Changes since v7:
> > - rebase on top of bpf-next
> > - fix sparse warnings
> > - improve comments for frame_length in include/net/xdp.h
> >
> > Changes since v6:
> > - the main difference with respect to previous versions is the new approach
> >   proposed by Eelco to pass the full length of the packet to the eBPF layer
> >   in the XDP context
> > - reintroduce multi-buff support to the eBPF kselftests
> > - reintroduce multi-buff support to bpf_xdp_adjust_tail helper
> > - introduce multi-buffer support to bpf_xdp_copy helper
> > - rebase on top of bpf-next
> >
> > Changes since v5:
> > - rebase on top of bpf-next
> > - initialize mb bit in xdp_init_buff() and drop per-driver initialization
> > - drop xdp->mb initialization in xdp_convert_zc_to_xdp_frame()
> > - postpone introduction of frame_length field in XDP ctx to another series
> > - minor changes
> >
> > Changes since v4:
> > - rebase on top of bpf-next
> > - introduce xdp_shared_info to build xdp multi-buff instead of using the
> >   skb_shared_info struct
> > - introduce frame_length in xdp ctx
> > - drop previous bpf helpers
> > - fix bpf_xdp_adjust_tail for xdp multi-buff
> > - introduce xdp multi-buff self-tests for bpf_xdp_adjust_tail
> > - fix xdp_return_frame_bulk for xdp multi-buff
> >
> > Changes since v3:
> > - rebase on top of bpf-next
> > - add patch 10/13 to copy back paged data from an xdp multi-buff frame to a
> >   userspace buffer for the xdp multi-buff selftests
> >
> > Changes since v2:
> > - add throughput measurements
> > - drop bpf_xdp_adjust_mb_header bpf helper
> > - introduce selftest for xdp multibuffer
> > - addressed comments on bpf_xdp_get_frags_count
> > - introduce xdp multi-buff support to cpumaps
> >
> > Changes since v1:
> > - Fix use-after-free in xdp_return_{buff/frame}
> > - Introduce bpf helpers
> > - Introduce xdp_mb sample program
> > - access skb_shared_info->nr_frags only on the last fragment
> >
> > Changes since RFC:
> > - squash multi-buffer bit initialization in a single patch
> > - add mvneta non-linear XDP buff support for tx side
> >
> > [0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
> > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> > [2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver (XDP multi-buffers section)
> >
> > Eelco Chaudron (3):
> >   bpf: add multi-buff support to the bpf_xdp_adjust_tail() API
> >   bpf: add multi-buffer support to xdp copy helpers
> >   bpf: update xdp_adjust_tail selftest to include multi-buffer
> >
> > Lorenzo Bianconi (11):
> >   net: skbuff: add data_len field to skb_shared_info
> >   xdp: introduce flags field in xdp_buff/xdp_frame
> >   net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
> >   xdp: add multi-buff support to xdp_return_{buff/frame}
> >   net: mvneta: add multi buffer support to XDP_TX
> >   net: mvneta: enable jumbo frames for XDP
> >   net: xdp: add multi-buff support to xdp_build_skb_from_frame
> >   bpf: introduce bpf_xdp_get_buff_len helper
> >   bpf: move user_size out of bpf_test_init
> >   bpf: introduce multibuff support to bpf_prog_test_run_xdp()
> >   bpf: test_run: add xdp_shared_info pointer in bpf_test_finish
> >     signature
> >
> >  drivers/net/ethernet/marvell/mvneta.c         | 143 ++++++++++------
> >  include/linux/skbuff.h                        |   5 +-
> >  include/net/xdp.h                             |  56 ++++++-
> >  include/uapi/linux/bpf.h                      |   7 +
> >  kernel/trace/bpf_trace.c                      |   3 +
> >  net/bpf/test_run.c                            | 108 +++++++++---
> >  net/core/filter.c                             | 157 +++++++++++++++++-
> >  net/core/xdp.c                                |  72 +++++++-
> >  tools/include/uapi/linux/bpf.h                |   7 +
> >  .../bpf/prog_tests/xdp_adjust_tail.c          | 105 ++++++++++++
> >  .../selftests/bpf/prog_tests/xdp_bpf2bpf.c    | 127 +++++++++-----
> >  .../bpf/progs/test_xdp_adjust_tail_grow.c     |  10 +-
> >  .../bpf/progs/test_xdp_adjust_tail_shrink.c   |  32 +++-
> >  .../selftests/bpf/progs/test_xdp_bpf2bpf.c    |   2 +-
> >  14 files changed, 705 insertions(+), 129 deletions(-)
> >
> > --
> > 2.31.1
> >
>
>
