From: Daniel Borkmann <daniel@iogearbox.net>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: "Alexei Starovoitov" <alexei.starovoitov@gmail.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Martin KaFai Lau" <kafai@fb.com>,
	"Song Liu" <songliubraving@fb.com>, "Yonghong Song" <yhs@fb.com>,
	"Marek Majkowski" <marek@cloudflare.com>,
	"Lorenz Bauer" <lmb@cloudflare.com>,
	"Alan Maguire" <alan.maguire@oracle.com>,
	"David Miller" <davem@davemloft.net>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v3 1/5] bpf: Support chain calling multiple BPF programs after each other
Date: Wed, 16 Oct 2019 12:35:01 +0200	[thread overview]
Message-ID: <20191016103501.GB21367@pc-63.home> (raw)
In-Reply-To: <20191016102712.18f369e7@carbon>

On Wed, Oct 16, 2019 at 10:27:12AM +0200, Jesper Dangaard Brouer wrote:
> On Tue, 15 Oct 2019 19:28:51 -0700
> Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> > On Mon, Oct 14, 2019 at 02:35:45PM +0200, Toke Høiland-Jørgensen wrote:
> > > Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:
> > > > On Wed, Oct 09, 2019 at 10:03:43AM +0200, Toke Høiland-Jørgensen wrote:  
> > > >> Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:
[...]
> > > > If you disagree please explain _your_ problem again.
> > > > Saying that fb katran is a use case for chaining is, hrm, not correct.  
> > > 
> > > I never said Katran was the driver for this. I just used Katran as one
> > > of the "prior art" examples for my "how are people solving running
> > > multiple programs on the same interface" survey.  
> > 
> > and they solved it. that's the point.
> > 
> > > What I want to achieve is simply the ability to run multiple independent
> > > XDP programs on the same interface, without having to put any
> > > constraints on the programs themselves. I'm not disputing that this is
> > > *possible* to do completely in userspace, I just don't believe the
> > > resulting solution will be very good.  
> > 
> > What makes me uneasy about the whole push for program chaining
> > is that tc cls_bpf supported multiple independent programs from day one.
> > Yet it doesn't help to run two firewalls hooked into tc ingress.
> 
> I do understand your concerns.
> 
> Let me explain why I believe TC cls_bpf multiple independent programs
> have not seen much usage.
> 
> First of all, the TC tool is notoriously difficult to use and configure
> (I admit, I struggle with this myself every single time). (The TC layer
> has some amazing features, like hash-based lookup, that never get used
> because of this.)

We do use cls_bpf heavily in Cilium, but I don't necessarily agree on
the notoriously-difficult-to-use aspect (at least for tc + BPF): i) this
is abstracted away from the /user/ entirely, to the point that it is an
implementation detail they don't need to know about; ii) these days most
access to these hooks is done programmatically. If the latter is a worry,
then let's simply add a cls_bpf counterpart to APIs like
bpf_set_link_xdp_fd() that we have in libbpf, where you only pass in the
ifindex, direction (ingress/egress) and priority of the program, so that
underneath it sets up a clsact qdisc with a cls_bpf instance and makes
the whole thing foolproof, e.g.:

  int bpf_set_link_tc_fd(int ifindex, int fd, enum bpf_tc_dir dir,
                         __u32 priority, __u32 flags);

The flags could be similar to XDP: 0 or xxx_FLAGS_UPDATE_IF_NOEXIST and
xxx_FLAGS_HW_MODE. The problem that is easy to miss via the tc command-line
tool is that when you don't specify an explicit prio/handle upon tc
replace, it auto-allocates one and keeps adding new programs instead of
replacing the old ones, but this quirk can be solved via an API like the
above.

> Second, the multiple "independent programs", are actually not
> independent, because the current running program must return
> TC_ACT_UNSPEC to allow next bpf-prog to run.  Thus, it is not really
> usable.

I'd argue that unless the only thing you do in your debugging program is
introspect (read-only) the packet at the current point, you'd run into a
similar coordination issue. Meaning, the "independent programs" model
works for simple cases where you only have ACCEPT and DROP policies,
such that you can run through all the programs and give precedence to
DROP.

But once you have conflicting policies with regards to how these programs
mangle and redirect packets, how would you handle those? I'd argue it's
a non-trivial task to outsource if /admins/ are supposed to make manual
ordering adjustments and, more importantly, to troubleshoot issues caused
by them. Debugging hooks could potentially make that easier by avoiding
recompilation, but that's more of a developer task.

Oftentimes orchestration tools i) assume they just own the data path to
reduce complexity in an already complex system, and ii) also keep
'refreshing' their setup. One random example of the latter is k8s'
kube-proxy, which reinstalls its iptables rules every x seconds to make
sure there has been no manual messing around and to keep the data path
eventually consistent with the daemon's view (in case it got borked).
How would you make the loader aware of daemons automatically refreshing/
reconfiguring their BPF progs in a situation where admins have changed
the pipeline? Would you add a handle similar to tc's, so that whoever
does the 'chain' assembly knows which one to update?

> > Similarly cgroup-bpf had a ton discussions on proper multi-prog api.
> > Everyone was eventually convinced that it's flexible and generic.
> > Yet people who started to use it complain that it's missing features
> > to make it truly usable in production.
> 
> I've not looked at the cgroup-bpf multi-prog API; I guess we should, to
> understand why this failed.
> 
> > Tracing is the only bit where multi-prog works.
> > Because kernel always runs all programs there.
> 
> This is an important insight ("kernel always runs all programs").  A key
> part of Toke's design with chain-calling is that the kernel always
> runs all the XDP/BPF-progs in the chain, regardless of the XDP return
> value.  The next program in the chain needs info about the previous
> BPF-prog's return value, but it can choose to override it.
> 
> > If we could use PROG_RUN_ARRAY for XDP that could have been a solution.
> > But we cannot. Return codes matter for XDP.
> 
> The proposal from Toke is to allow the next-chain BPF-program to
> override the prev BPF-prog's return value.  This part of the design, I
> must admit, is also the only option due to tail-calls.  But I do think
> it makes sense, because even if XDP_DROP is returned, I can install
> another XDP-prog that does XDP_REDIRECT out another interface to an
> analyzer box, or into an AF_XDP based dump tool.

Thanks,
Daniel

