From: Willem de Bruijn <firstname.lastname@example.org>
To: "Toke Høiland-Jørgensen" <email@example.com>
Cc: Jesper Dangaard Brouer <firstname.lastname@example.org>,
	"Machulsky, Zorik" <email@example.com>,
	"Jubran, Samih" <firstname.lastname@example.org>,
	"email@example.com" <firstname.lastname@example.org>,
	"email@example.com" <firstname.lastname@example.org>,
	"Woodhouse, David" <email@example.com>,
	"Matushevsky, Alexander" <firstname.lastname@example.org>,
	"Bshara, Saeed" <email@example.com>,
	"Wilson, Matt" <firstname.lastname@example.org>,
	"Liguori, Anthony" <email@example.com>,
	"Bshara, Nafea" <firstname.lastname@example.org>,
	"Tzalik, Guy" <email@example.com>,
	"Belgazal, Netanel" <firstname.lastname@example.org>,
	"Saidi, Ali" <email@example.com>,
	"Herrenschmidt, Benjamin" <firstname.lastname@example.org>,
	"Kiyanovski, Arthur" <email@example.com>,
	Daniel Borkmann <firstname.lastname@example.org>,
	Ilias Apalodimas <email@example.com>,
	Alexei Starovoitov <firstname.lastname@example.org>,
	Jakub Kicinski <email@example.com>,
	"firstname.lastname@example.org" <email@example.com>
Subject: Re: XDP multi-buffer incl. jumbo-frames (Was: [RFC V1 net-next 1/1] net: ena: implement XDP drop support)
Date: Wed, 26 Jun 2019 11:20:45 -0400
Message-ID: <CA+FuTSfKnhv9rr=cDa_4m7Dd9qkEm_oabDfyvH0T0sM+fQTUfirstname.lastname@example.org> (raw)
In-Reply-To: <email@example.com>

On Wed, Jun 26, 2019 at 11:01 AM Toke Høiland-Jørgensen
<firstname.lastname@example.org> wrote:
>
> Jesper Dangaard Brouer <email@example.com> writes:
>
> > On Wed, 26 Jun 2019 13:52:16 +0200
> > Toke Høiland-Jørgensen <firstname.lastname@example.org> wrote:
> >
> >> Jesper Dangaard Brouer <email@example.com> writes:
> >>
> >> > On Tue, 25 Jun 2019 03:19:22 +0000
> >> > "Machulsky, Zorik" <firstname.lastname@example.org> wrote:
> >> >
> >> >> On 6/23/19, 7:21 AM, "Jesper Dangaard Brouer" <email@example.com> wrote:
> >> >>
> >> >>     On Sun, 23 Jun 2019 10:06:49 +0300 <firstname.lastname@example.org> wrote:
> >> >>
> >> >>     > This commit implements the basic functionality of drop/pass logic in the
> >> >>     > ena driver.
> >> >>
> >> >>     Usually we require a driver to implement all the XDP return codes,
> >> >>     before we accept it. But as Daniel and I discussed with Zorik during
> >> >>     NetConf, we are going to make an exception and accept the driver
> >> >>     if you also implement XDP_TX.
> >> >>
> >> >>     As we trust that Zorik/Amazon will follow and implement XDP_REDIRECT
> >> >>     later, given he/you wants AF_XDP support which requires XDP_REDIRECT.
> >> >>
> >> >> Jesper, thanks for your comments and very helpful discussion during
> >> >> NetConf! That's the plan, as we agreed. From our side I would like to
> >> >> reiterate again the importance of multi-buffer support by xdp frame.
> >> >> We would really prefer not to see our MTU shrinking because of xdp
> >> >> support.
> >> >
> >> > Okay we really need to make a serious attempt to find a way to support
> >> > multi-buffer packets with XDP. With the important criteria of not
> >> > hurting performance of the single-buffer per packet design.
> >> >
> >> > I've created a design document, that I will update based on our
> >> > discussions:
> >> > https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> >> >
> >> > The use-case that really convinced me was Eric's packet header-split.

Thanks for starting this discussion Jesper!

> >> >
> >> > Lets refresh: Why XDP don't have multi-buffer support:
> >> >
> >> > XDP is designed for maximum performance, which is why certain driver-level
> >> > use-cases were not supported, like multi-buffer packets (like jumbo-frames).
> >> > As it e.g. complicated the driver RX-loop and memory model handling.
> >> >
> >> > The single buffer per packet design, is also tied into eBPF Direct-Access
> >> > (DA) to packet data, which can only be allowed if the packet memory is in
> >> > contiguous memory. This DA feature is essential for XDP performance.
> >> >
> >> > One way forward is to define that XDP only get access to the first
> >> > packet buffer, and it cannot see subsequent buffers. For XDP_TX and
> >> > XDP_REDIRECT to work then XDP still need to carry pointers (plus
> >> > len+offset) to the other buffers, which is 16 bytes per extra buffer.
> >>
> >> Yeah, I think this would be reasonable. As long as we can have a
> >> metadata field with the full length + still give XDP programs the
> >> ability to truncate the packet (i.e., discard the subsequent pages)
> >
> > You touch upon some interesting complications already:
> >
> > 1. It is valuable for XDP bpf_prog to know "full" length?
> >    (if so, then we need to extend xdp ctx with info)
>
> Valuable, quite likely. A hard requirement, probably not (for all use
> cases).

Agreed. One common validation use would be to drop any packets whose
header length disagrees with the actual packet length.

> > But if we need to know the full length, when the first-buffer is
> > processed.
> > Then realize that this affect the drivers RX-loop, because
> > then we need to "collect" all the buffers before we can know the
> > length (although some HW provide this in first descriptor).
> >
> > We likely have to change drivers RX-loop anyhow, as XDP_TX and
> > XDP_REDIRECT will also need to "collect" all buffers before the packet
> > can be forwarded. (Although this could potentially happen later in
> > driver loop when it meet/find the End-Of-Packet descriptor bit).

Yes, this might be quite a bit of refactoring of device driver code.

Should we move forward with some initial constraints, e.g., no
XDP_REDIRECT, no "full" length and no bpf_xdp_adjust_tail? That
already allows many useful programs. As long as we don't arrive at a
design that cannot be extended with those features later.

> >
> > 2. Can we even allow helper bpf_xdp_adjust_tail() ?
> >
> >    Wouldn't it be easier to disallow a BPF-prog with this helper, when
> >    driver have configured multi-buffer?
>
> Easier, certainly. But then it's even easier to not implement this at
> all ;)
>
> >    Or will it be too restrictive, if jumbo-frame is very uncommon and
> >    only enabled because switch infra could not be changed (like Amazon
> >    case).

Header-split, LRO and jumbo frame are certainly not limited to the
Amazon case.

> I think it would be preferable to support it; but maybe we can let that
> depend on how difficult it actually turns out to be to allow it?
>
> >    Perhaps it is better to let bpf_xdp_adjust_tail() fail runtime?
>
> If we do disallow it, I think I'd lean towards failing the call at
> runtime...

Disagree. I'd rather have a program fail at load if it depends on
multi-frag support while the (driver) implementation does not yet
support it.
Thread overview: 22+ messages

2019-06-23  7:06 [RFC V1 net-next 0/1] Introduce xdp to ena sameehj
2019-06-23  7:06 ` [RFC V1 net-next 1/1] net: ena: implement XDP drop support sameehj
2019-06-23 14:09 ` Jesper Dangaard Brouer
2019-06-23 14:21 ` Jesper Dangaard Brouer
2019-06-25  3:19 ` Machulsky, Zorik
2019-06-26  8:38 ` XDP multi-buffer incl. jumbo-frames (Was: [RFC V1 net-next 1/1] net: ena: implement XDP drop support) Jesper Dangaard Brouer
2019-06-26 11:52 ` Toke Høiland-Jørgensen
2019-06-26 14:40 ` Jesper Dangaard Brouer
2019-06-26 15:01 ` Toke Høiland-Jørgensen
2019-06-26 15:20 ` Willem de Bruijn [this message]
2019-06-26 16:42 ` Jonathan Lemon
2019-06-26 20:00 ` Jesper Dangaard Brouer
2019-06-27 22:07 ` Jonathan Lemon
2019-06-28  8:46 ` Jesper Dangaard Brouer
2019-06-26 15:14 ` Toke Høiland-Jørgensen
2019-06-26 16:36 ` Jesper Dangaard Brouer
2019-06-28  7:14 ` Eelco Chaudron
2019-06-28  7:46 ` Toke Høiland-Jørgensen
2019-06-28 11:49 ` Eelco Chaudron
2019-06-28  8:22 ` Jesper Dangaard Brouer
2019-06-23 14:28 ` [RFC V1 net-next 1/1] net: ena: implement XDP drop support David Ahern
2019-06-23 14:51 ` Maciej Fijalkowski