From: Jesper Dangaard Brouer <firstname.lastname@example.org>
To: Alexei Starovoitov <email@example.com>
Cc: "Toke Høiland-Jørgensen" <firstname.lastname@example.org>,
	"Daniel Borkmann" <email@example.com>,
	"Alexei Starovoitov" <firstname.lastname@example.org>,
	"Martin KaFai Lau" <email@example.com>,
	"Song Liu" <firstname.lastname@example.org>,
	"Yonghong Song" <email@example.com>,
	"Marek Majkowski" <firstname.lastname@example.org>,
	"Lorenz Bauer" <email@example.com>,
	"Alan Maguire" <firstname.lastname@example.org>,
	"David Miller" <email@example.com>,
	firstname.lastname@example.org, email@example.com,
	firstname.lastname@example.org
Subject: Re: [PATCH bpf-next v3 1/5] bpf: Support chain calling multiple BPF programs after each other
Date: Wed, 16 Oct 2019 10:27:12 +0200
Message-ID: <20191016102712.18f369e7@carbon>
In-Reply-To: <20191016022849.weomgfdtep4aojpm@ast-mbp>

On Tue, 15 Oct 2019 19:28:51 -0700
Alexei Starovoitov <email@example.com> wrote:

> On Mon, Oct 14, 2019 at 02:35:45PM +0200, Toke Høiland-Jørgensen wrote:
> > Alexei Starovoitov <firstname.lastname@example.org> writes:
> >
> > > On Wed, Oct 09, 2019 at 10:03:43AM +0200, Toke Høiland-Jørgensen wrote:
> > >> Alexei Starovoitov <email@example.com> writes:
> > >>
> > >> > Please implement proper indirect calls and jumps.
> > >>
> > >> I am still not convinced this will actually solve our problem; but OK, I
> > >> can give it a shot.
> > >
> > > If you're not convinced let's talk about it first.
> > >
> > > Indirect calls is a building block for debugpoints.
> > > Let's not call them tracepoints, because Linus banned any discussion
> > > that includes that name.
> > > The debugpoints is a way for BPF program to insert points in its
> > > code to let external facility to do tracing and debugging.
> > >
> > > void (*debugpoint1)(struct xdp_buff *, int code);
> > > void (*debugpoint2)(struct xdp_buff *);
> > > void (*debugpoint3)(int len);
> >
> > So how would these work?
> > Similar to global variables (i.e., the loader
> > creates a single-entry PROG_ARRAY map for each one)? Presumably with
> > some BTF to validate the argument types?
> >
> > So what would it take to actually support this? It doesn't quite sound
> > trivial to add?
>
> Depends on definition of 'trivial' :)
> The kernel has a luxury of waiting until clean solution is implemented
> instead of resorting to hacks.
>
> > > Essentially it's live debugging (tracing) of cooperative bpf programs
> > > that added debugpoints to their code.
> >
> > Yup, certainly not disputing that this would be useful for debugging;
> > although it'll probably be a while before its use becomes widespread
> > enough that it'll be a reliable tool for people deploying XDP
> > programs...
>
> same for any new api.
>
> > > Obviously indirect calls can be used for a ton of other things
> > > including proper chaining of progs, but I'm convinced that
> > > you don't need chaining to solve your problem.
> > > You need debugging.
> >
> > Debugging is certainly also an area that I want to improve. However, I
> > think that focusing on debugging as the driver for chaining programs was
> > a mistake on my part; rudimentary debugging (using a tool such as
> > xdpdump) is something that falls out of program chaining, but it's not
> > the main driver for it.
>
> xdpdump can be done already the way I suggested without adding new
> kernel code and it will work on old-ish kernels. Aside from xdp itself
> the other requirement is to have get_fd_by_id sys_bpf command.

You only demonstrated that we can hook xdpdump in BEFORE an existing XDP
program, without modifying the XDP program. I'm much more interested in
running xdpdump AFTER an existing XDP program (without modifying it),
and, very importantly, I want to know the XDP return codes in my
xdpdump. That said, with your proposal of "proper indirect calls for
BPF", the xdpdump-AFTER case will be easy to implement.
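To make the debugpoint idea above concrete, here is a minimal userspace C sketch of the pattern (this is my illustration, not the actual kernel/verifier mechanics: the stub struct, the handler, and the attach step are all hypothetical; in real BPF the indirect-call slot would be backed by e.g. a single-entry PROG_ARRAY):

```c
#include <stddef.h>

/* Hypothetical sketch of a "debugpoint": an indirect-call slot that is
 * NULL by default, so the program only pays a test-and-skip until an
 * external facility attaches a handler. */

struct xdp_buff_stub { int len; };   /* stand-in for struct xdp_buff */

static void (*debugpoint1)(struct xdp_buff_stub *, int code) = NULL;

static int last_seen_code = -1;

/* An xdpdump-style handler that records the verdict it was shown. */
static void dump_handler(struct xdp_buff_stub *xdp, int code)
{
	(void)xdp;
	last_seen_code = code;
}

static int xdp_prog(struct xdp_buff_stub *xdp)
{
	int verdict = 1;                 /* pretend we computed XDP_DROP (1) */

	if (debugpoint1)                 /* debugpoint: only runs if attached */
		debugpoint1(xdp, verdict);
	return verdict;
}
```

The point of the sketch: attaching `dump_handler` to `debugpoint1` from the outside gives an xdpdump-after view, including the return code, without touching `xdp_prog` itself.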
Maybe we should not focus on the xdpdump use-case; it might be better
solved by simply adding a tracepoint that has access to the xdp_buff.

> > > If you disagree please explain _your_ problem again.
> > > Saying that fb katran is a use case for chaining is, hrm, not correct.
> >
> > I never said Katran was the driver for this. I just used Katran as one
> > of the "prior art" examples for my "how are people solving running
> > multiple programs on the same interface" survey.
>
> and they solved it. that's the point.
>
> > What I want to achieve is simply the ability to run multiple independent
> > XDP programs on the same interface, without having to put any
> > constraints on the programs themselves. I'm not disputing that this is
> > *possible* to do completely in userspace, I just don't believe the
> > resulting solution will be very good.
>
> What makes me uneasy about the whole push for program chaining
> is that tc cls_bpf supported multiple independent programs from day one.
> Yet it doesn't help to run two firewalls hooked into tc ingress.

I do understand your concerns. Let me explain why I believe the TC
cls_bpf support for multiple independent programs has not seen much
usage.

First of all, the TC tool is notoriously difficult to use and configure
(I admit, I struggle with this myself every single time). (The TC layer
has some amazing features, like hash-based lookup, that never get used
because of this.)

Second, the multiple "independent programs" are actually not
independent, because the currently running program must return
TC_ACT_UNSPEC to allow the next bpf-prog to run. Thus, it is not really
usable.

> Similarly cgroup-bpf had a ton discussions on proper multi-prog api.
> Everyone was eventually convinced that it's flexible and generic.
> Yet people who started to use it complain that it's missing features
> to make it truly usable in production.

I've not looked at the cgroup-bpf multi-prog API; I guess we should, to
understand why this failed.
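The "not really independent" point can be illustrated with a small userspace model of how a cls_bpf-style filter list is walked (a simplified sketch of the behaviour described above, with the real TC_ACT_* constant values; the walk function and the two example firewalls are mine, not kernel code):

```c
/* Simplified model of TC cls_bpf semantics: programs run in order, but
 * the walk stops at the first program whose return is not TC_ACT_UNSPEC.
 * So a later "independent" program only runs if every earlier one
 * explicitly cooperates by returning TC_ACT_UNSPEC. */

#define TC_ACT_UNSPEC  (-1)
#define TC_ACT_OK        0
#define TC_ACT_SHOT      2

typedef int (*tc_prog_fn)(void);

static int run_cls_bpf_list(tc_prog_fn *progs, int n, int *nr_run)
{
	int i, ret = TC_ACT_UNSPEC;

	*nr_run = 0;
	for (i = 0; i < n; i++) {
		(*nr_run)++;
		ret = progs[i]();
		if (ret != TC_ACT_UNSPEC)  /* first real verdict ends the walk */
			return ret;
	}
	return ret;
}

static int firewall_a(void) { return TC_ACT_OK; }    /* accepts the packet */
static int firewall_b(void) { return TC_ACT_SHOT; }  /* never consulted   */
```

Two firewalls hooked in this way don't compose: as soon as `firewall_a` returns a real verdict (TC_ACT_OK), `firewall_b` is never run.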
> Tracing is the only bit where multi-prog works.
> Because kernel always runs all programs there.

This is an important insight ("kernel always runs all programs"). A key
part of Toke's design for chain-calling is that the kernel always runs
all the XDP/BPF-progs in the chain, regardless of the XDP return value.
The next program in the chain needs info about the previous BPF-prog's
return value, but it can choose to override it.

> If we could use PROG_RUN_ARRAY for XDP that could have been a solution.
> But we cannot. Return codes matter for XDP.

The proposal from Toke is to allow the next-chain BPF-program to
override the previous BPF-prog's return value. This part of the design
is, I must admit, also the only option due to tail-calls. But I do think
it makes sense, because even if XDP_DROP is returned, I can install
another XDP-prog that does XDP_REDIRECT out another interface to an
analyzer box, or into an AF_XDP based dump tool.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
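The override semantics argued for in this thread can be sketched in a few lines of userspace C (a hypothetical model of the chain-call design being discussed, not code from the patchset; the chain runner and the two example programs are mine, and the verdict values match enum xdp_action in the UAPI):

```c
/* Sketch of chain-call semantics: unlike cls_bpf, all programs in the
 * chain run regardless of the return value.  Each program sees the
 * previous verdict and may keep or override it. */

enum { XDP_DROP = 1, XDP_PASS = 2, XDP_REDIRECT = 4 };

typedef int (*chain_prog_fn)(int prev_verdict);

static int run_chain(chain_prog_fn *progs, int n)
{
	int i, verdict = XDP_PASS;           /* default for an empty chain */

	for (i = 0; i < n; i++)
		verdict = progs[i](verdict); /* every prog runs; last word wins */
	return verdict;
}

/* A firewall that drops the packet. */
static int firewall(int prev) { (void)prev; return XDP_DROP; }

/* An xdpdump-after prog: sees the XDP_DROP verdict and overrides it,
 * e.g. to redirect the packet towards an analyzer / AF_XDP dump tool. */
static int xdpdump_after(int prev)
{
	return prev == XDP_DROP ? XDP_REDIRECT : prev;
}
```

With the firewall alone the verdict is XDP_DROP; appending the dump program to the chain lets it observe that verdict and turn it into XDP_REDIRECT, which is exactly the xdpdump-AFTER scenario described above.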