* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Daniel Borkmann @ 2020-03-03 19:46 UTC
To: Willem de Bruijn, Jakub Kicinski
Cc: Luigi Rizzo, Network Development, Toke Høiland-Jørgensen,
    David Miller, hawk, Jubran, Samih, linux-kernel, ast, bpf

On 2/29/20 12:53 AM, Willem de Bruijn wrote:
> On Fri, Feb 28, 2020 at 2:01 PM Jakub Kicinski <kuba@kernel.org> wrote:
>> On Fri, 28 Feb 2020 02:54:35 -0800 Luigi Rizzo wrote:
>>> Add a netdevice flag to control skb linearization in generic xdp mode.
>>>
>>> The attribute can be modified through
>>> /sys/class/net/<DEVICE>/xdpgeneric_linearize
>>> The default is 1 (on)
>>>
>>> Motivation: xdp expects linear skbs with some minimum headroom, and
>>> generic xdp calls skb_linearize() if needed. The linearization is
>>> expensive, and may be unnecessary e.g. when the xdp program does
>>> not need access to the whole payload. This sysfs entry allows users
>>> to opt out of linearization on a per-device basis (linearization is
>>> still performed on cloned skbs).
>>>
>>> On a kernel instrumented to grab timestamps around the linearization
>>> code in netif_receive_generic_xdp, and heavy netperf traffic with
>>> 1500b mtu, I see the following times (nanoseconds/pkt).
>>>
>>> The receiver generally sees larger packets, so the difference is more
>>> significant.
>>>
>>> ns/pkt              RECEIVER                 SENDER
>>>                   p50    p90     p99       p50    p90    p99
>>>
>>> LINEARIZATION:    600ns  1090ns  4900ns    149ns  249ns  460ns
>>> NO LINEARIZATION:  40ns    59ns    90ns     40ns   50ns  100ns
>>>
>>> v1 --> v2 : added Documentation
>>> v2 --> v3 : adjusted for skb_cloned
>>> v3 --> v4 : renamed to xdpgeneric_linearize, documentation
>>>
>>> Signed-off-by: Luigi Rizzo <lrizzo@google.com>
>>
>> Just load your program in cls_bpf. No extensions or knobs needed.
>>
>> Making xdpgeneric-only extensions without touching native XDP makes
>> no sense to me. Is this part of some greater vision?
>
> Yes, native xdp has the same issue when handling packets that exceed a
> page (4K+ MTU) or otherwise consist of multiple segments. The issue is
> just more acute in generic xdp. But agreed that both need to be solved
> together.
>
> Many programs need only access to the header. There currently is not a
> way to express this, or for xdp to convey that the buffer covers only
> part of the packet.

Right, my only question earlier was: when users ship their application
with /sys/class/net/<DEVICE>/xdpgeneric_linearize turned off, how would
they know how much of the data is actually pulled in? Afaik, some
drivers might only have a linear section that covers the eth header and
that is it. What should the BPF prog do in such a case? Drop the skb,
since it does not have the rest of the data to e.g. make an XDP_PASS
decision, or fall back to tc/BPF altogether? As I hinted earlier, one
way to make this more graceful is to add a skb pointer inside e.g.
struct xdp_rxq_info and then enable a bpf_skb_pull_data()-like helper,
e.g. as:

BPF_CALL_2(bpf_xdp_pull_data, struct xdp_buff *, xdp, u32, len)
{
	struct sk_buff *skb = xdp->rxq->skb;

	return skb ? bpf_try_make_writable(skb, len ? : skb_headlen(skb)) :
		     -ENOTSUPP;
}

Thus, when the data/data_end test fails in generic XDP, the user can
call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
is needed w/o full linearization, and once done the data/data_end test
can be repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL,
but later we could perhaps reuse the same bpf_xdp_pull_data() helper for
native with skb-less backing. Thoughts?

Thanks,
Daniel
* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Jakub Kicinski @ 2020-03-03 20:50 UTC
To: Daniel Borkmann
Cc: Willem de Bruijn, Luigi Rizzo, Network Development,
    Toke Høiland-Jørgensen, David Miller, hawk, Jubran, Samih,
    linux-kernel, ast, bpf

On Tue, 3 Mar 2020 20:46:55 +0100 Daniel Borkmann wrote:
> Thus, when the data/data_end test fails in generic XDP, the user can
> call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
> is needed w/o full linearization and once done the data/data_end can be
> repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL, but
> later we could perhaps reuse the same bpf_xdp_pull_data() helper for
> native with skb-less backing. Thoughts?

I'm curious why we consider a xdpgeneric-only addition. Is attaching
a cls_bpf program noticeably slower than xdpgeneric?
* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Daniel Borkmann @ 2020-03-03 21:04 UTC
To: Jakub Kicinski
Cc: Willem de Bruijn, Luigi Rizzo, Network Development,
    Toke Høiland-Jørgensen, David Miller, hawk, Jubran, Samih,
    linux-kernel, ast, bpf

On 3/3/20 9:50 PM, Jakub Kicinski wrote:
> On Tue, 3 Mar 2020 20:46:55 +0100 Daniel Borkmann wrote:
>> Thus, when the data/data_end test fails in generic XDP, the user can
>> call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
>> is needed w/o full linearization and once done the data/data_end can be
>> repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL, but
>> later we could perhaps reuse the same bpf_xdp_pull_data() helper for
>> native with skb-less backing. Thoughts?
>
> I'm curious why we consider a xdpgeneric-only addition. Is attaching
> a cls_bpf program noticeably slower than xdpgeneric?

Yeah, agree, I'm curious about that part as well.

Thanks,
Daniel
* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Willem de Bruijn @ 2020-03-03 21:10 UTC
To: Jakub Kicinski
Cc: Daniel Borkmann, Luigi Rizzo, Network Development,
    Toke Høiland-Jørgensen, David Miller, hawk, Jubran, Samih,
    linux-kernel, Alexei Starovoitov, bpf

On Tue, Mar 3, 2020 at 3:50 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 3 Mar 2020 20:46:55 +0100 Daniel Borkmann wrote:
> > Thus, when the data/data_end test fails in generic XDP, the user can
> > call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
> > is needed w/o full linearization and once done the data/data_end can be
> > repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL, but
> > later we could perhaps reuse the same bpf_xdp_pull_data() helper for
> > native with skb-less backing. Thoughts?

Something akin to pskb_may_pull sounds like a great solution to me.

Another approach would be a new xdp_action XDP_NEED_LINEARIZED that
causes the program to be restarted after linearization. But that is
both more expensive and less elegant.

Instead of a sysctl or device option, is this an optimization that
could be taken based on the program? Specifically, would XDP_FLAGS be
a path to pass a SUPPORT_SG flag along with the program? I'm not
entirely familiar with the XDP setup code, so this may be totally off.
But from a quick read it seems like generic_xdp_install could transfer
such a flag to struct net_device.

> I'm curious why we consider a xdpgeneric-only addition. Is attaching
> a cls_bpf program noticeably slower than xdpgeneric?

This just should not be xdp*generic* only, but allow us to use any XDP
with large MTU sizes and without having to disable GRO.

I'd still like a way to be able to drop or modify packets before GRO,
or to signal that a type of packet should skip GRO.
* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Jesper Dangaard Brouer @ 2020-03-04 9:18 UTC
To: Willem de Bruijn
Cc: brouer, Jakub Kicinski, Daniel Borkmann, Luigi Rizzo,
    Network Development, Toke Høiland-Jørgensen, David Miller,
    Alexander Duyck, Jubran, Samih, linux-kernel, Alexei Starovoitov, bpf

On Tue, 3 Mar 2020 16:10:14 -0500
Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:

> On Tue, Mar 3, 2020 at 3:50 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Tue, 3 Mar 2020 20:46:55 +0100 Daniel Borkmann wrote:
> > > Thus, when the data/data_end test fails in generic XDP, the user can
> > > call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
> > > is needed w/o full linearization and once done the data/data_end can be
> > > repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL, but
> > > later we could perhaps reuse the same bpf_xdp_pull_data() helper for
> > > native with skb-less backing. Thoughts?
>
> Something akin to pskb_may_pull sounds like a great solution to me.
>
> Another approach would be a new xdp_action XDP_NEED_LINEARIZED that
> causes the program to be restarted after linearization. But that is
> both more expensive and less elegant.
>
> Instead of a sysctl or device option, is this an optimization that
> could be taken based on the program? Specifically, would XDP_FLAGS be
> a path to pass a SUPPORT_SG flag along with the program? I'm not
> entirely familiar with the XDP setup code, so this may be totally off.
> But from a quick read it seems like generic_xdp_install could transfer
> such a flag to struct net_device.
>
> > I'm curious why we consider a xdpgeneric-only addition. Is attaching
> > a cls_bpf program noticeably slower than xdpgeneric?
>
> This just should not be xdp*generic* only, but allow us to use any XDP
> with large MTU sizes and without having to disable GRO.

This is an important point: "should not be xdp*generic* only". I really
want to see this work for XDP-native *first*, and it seems that with
Daniel's idea it can also work for XDP-generic.

As Jakub also hinted, it seems strange that people are trying to
implement this for XDP-generic, as I don't think there is any
performance advantage over cls_bpf. We really want this to work from
XDP-native.

> I'd still like a way to be able to drop or modify packets before GRO,
> or to signal that a type of packet should skip GRO.

That is a use-case that we should remember to support.

Samih (cc'ed) is working on adding multi-frame support[1] to XDP-native.
Given the huge interest this thread shows, I think I will dedicate some
of my time to help him out on the actual coding.

For my idea to work[2], we first need storage space for the multi-buffer
references, and I propose we use the skb_shared_info area, which is
available anyhow for XDP_PASS that calls build_skb(). Thus, we first
need to standardize across all XDP drivers how and where this memory
area is referenced/offset.

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#storage-space-for-multi-buffer-referencessegments

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Luigi Rizzo @ 2020-03-04 10:06 UTC
To: Daniel Borkmann
Cc: Willem de Bruijn, Jakub Kicinski, Network Development,
    Toke Høiland-Jørgensen, David Miller, Jesper Dangaard Brouer,
    Jubran, Samih, linux-kernel, ast, bpf

[taking one message in the thread to answer multiple issues]

On Tue, Mar 3, 2020 at 11:47 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 2/29/20 12:53 AM, Willem de Bruijn wrote:
> > On Fri, Feb 28, 2020 at 2:01 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >> On Fri, 28 Feb 2020 02:54:35 -0800 Luigi Rizzo wrote:
> >>> Add a netdevice flag to control skb linearization in generic xdp mode.
> >>>
> >>> The attribute can be modified through
> >>> /sys/class/net/<DEVICE>/xdpgeneric_linearize
> >>> The default is 1 (on)
...
> >>> ns/pkt              RECEIVER                 SENDER
> >>>                   p50    p90     p99       p50    p90    p99
> >>>
> >>> LINEARIZATION:    600ns  1090ns  4900ns    149ns  249ns  460ns
> >>> NO LINEARIZATION:  40ns    59ns    90ns     40ns   50ns  100ns
...
> >> Just load your program in cls_bpf. No extensions or knobs needed.

Yes, this is indeed an option; perhaps the only downside is that it acts
after packet taps, so if, say, the program is there to filter unwanted
traffic, we would miss that protection.

...
> >> Making xdpgeneric-only extensions without touching native XDP makes
> >> no sense to me. Is this part of some greater vision?
> >
> > Yes, native xdp has the same issue when handling packets that exceed a
> > page (4K+ MTU) or otherwise consist of multiple segments. The issue is
> > just more acute in generic xdp. But agreed that both need to be solved
> > together.
> >
> > Many programs need only access to the header. There currently is not a
> > way to express this, or for xdp to convey that the buffer covers only
> > part of the packet.
>
> Right, my only question earlier was: when users ship their application
> with /sys/class/net/<DEVICE>/xdpgeneric_linearize turned off, how would
> they know how much of the data is actually pulled in?

The short answer is that before turning linearization off, the sysadmin
should make sure that the linear section contains enough data for the
program to operate. In doubt, leave linearization on and live with the
cost.

The long answer (which probably repeats things I already discussed with
some of you): clearly this patch is not perfect, as it lacks ways for
the kernel and bpf program to communicate a) whether there is a
non-linear section, and b) whether the bpf program understands
non-linear/partial packets and how much linear data (and headroom) it
expects.

Adding these two features needs some agreement on the details. We had a
thread a few weeks ago about multi-segment xdp support; I am not sure we
reached a conclusion, and I am concerned that we may end up
reimplementing sg lists or simplified skbs for use in bpf programs,
where perhaps we could just live with a pull_up/accessor for occasional
access to the non-linear part, and some hints that the program can pass
to the driver/xdpgeneric to specify requirements (for #b). Specifically:

#a is trivial -- add a field to the xdp_buff, and a helper to read it
from the bpf program;

#b is a bit less clear -- it involves a helper to either pull_up or
access the non-linear data (which one is preferable probably depends on
the use case, and we may want both), and some attribute that the program
passes to the kernel at load time, to control when linearization should
be applied. I have hacked the 'license' section to pass this information
on a per-program basis, but we need a cleaner way.

My reasoning for suggesting this patch, as an interim solution, is that
being completely opt-in, one can carefully evaluate when it is safe to
use even without having #b implemented. For #a, the program might infer
(but not reliably) that some data are missing by looking at the payload
length which may be present in some of the headers. We could mitigate
abuse by e.g. forcing XDP_REDIRECT and XDP_TX in xdpgeneric to only
accept linear packets.

cheers
luigi