From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: John Fastabend <john.fastabend@gmail.com>,
	Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Martin KaFai Lau <kafai@fb.com>, Song Liu <songliubraving@fb.com>,
	Yonghong Song <yhs@fb.com>,
	Marek Majkowski <marek@cloudflare.com>,
	Lorenz Bauer <lmb@cloudflare.com>,
	David Miller <davem@davemloft.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: RE: [PATCH bpf-next 0/9] xdp: Support multiple programs on a single interface through chain calls
Date: Wed, 02 Oct 2019 21:09:33 +0200
Message-ID: <87wodnq80i.fsf@toke.dk>
In-Reply-To: <5d94d3c5a238f_22502b00ea21a5b4e9@john-XPS-13-9370.notmuch>

John Fastabend <john.fastabend@gmail.com> writes:

> Toke Høiland-Jørgensen wrote:
>> This series adds support for executing multiple XDP programs on a single
>> interface in sequence, through the use of chain calls, as discussed at the Linux
>> Plumbers Conference last month:
>> 
>> https://linuxplumbersconf.org/event/4/contributions/460/
>> 
>> # HIGH-LEVEL IDEA
>> 
>> The basic idea is to express the chain call sequence through a special map type,
>> which contains a mapping from a (program, return code) tuple to another program
>> to run next in the sequence. Userspace can populate this map to express
>> arbitrary call sequences, and update the sequence by updating or replacing the
>> map.
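
To make this concrete, here is a rough sketch of the map key and how
userspace could populate it. The struct layout, the names, and the use of
a plain hashmap are illustrative only; the series adds a dedicated map
type, and the exact UAPI is what's up for discussion:

    struct xdp_chain_key {
            __u32 prog_id;  /* ID of the program that just ran */
            __u32 retcode;  /* the XDP return code it returned */
    };

    /* map from (prog_id, retcode) to the next program to run */
    int map_fd = bpf_create_map(BPF_MAP_TYPE_HASH,
                                sizeof(struct xdp_chain_key),
                                sizeof(__u32) /* prog fd */, 16, 0);

    /* run the IDS program whenever the DDoS program returns XDP_PASS */
    struct xdp_chain_key key = { .prog_id = ddos_prog_id,
                                 .retcode = XDP_PASS };
    bpf_map_update_elem(map_fd, &key, &ids_prog_fd, BPF_ANY);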
>> 
>> The actual execution of the program sequence is done in bpf_prog_run_xdp(),
>> which will look up the chain sequence map and, if one is found, loop through calls
>> to BPF_PROG_RUN, looking up the next XDP program in the sequence based on the
>> previous program ID and return code.
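
Roughly like this (a simplified sketch, not the exact patch code; the
lookup helper name is made up, in the kernel it goes through the map ops,
and error handling is elided):

    u32 ret = BPF_PROG_RUN(prog, xdp);

    while (chain_map) {
            struct xdp_chain_key key = {
                    .prog_id = prog->aux->id,
                    .retcode = ret,
            };

            prog = xdp_chain_map_lookup(chain_map, &key);
            if (!prog)  /* no successor for this (prog, ret) pair */
                    break;

            ret = BPF_PROG_RUN(prog, xdp);
    }
    return ret;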
>> 
>> An XDP chain call map can be installed on an interface by means of a new netlink
>> attribute containing an fd pointing to a chain call map. This can be supplied
>> along with the XDP prog fd, so that a chain map is always installed together
>> with an XDP program.
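
From userspace this would look something along these lines (the helper
name here is hypothetical; patch 8 adds the actual libbpf support, in the
spirit of the existing bpf_set_link_xdp_fd()):

    /* install prog_fd and chain_map_fd together on the interface */
    err = bpf_set_link_xdp_chain_fd(ifindex, prog_fd,
                                    chain_map_fd, xdp_flags);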
>> 
>> # PERFORMANCE
>> 
>> I performed a simple performance test to get an initial feel for the overhead of
>> the chain call mechanism. This test consists of running only two programs in
>> sequence: One that returns XDP_PASS and another that returns XDP_DROP. I then
>> measure the drop PPS performance and compare it to a baseline of just a single
>> program that only returns XDP_DROP.
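
For reference, the two test programs are essentially just:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    SEC("xdp")
    int xdp_pass(struct xdp_md *ctx)
    {
            return XDP_PASS;
    }

    SEC("xdp")
    int xdp_drop(struct xdp_md *ctx)
    {
            return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";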
>> 
>> For comparison, a test case that uses regular eBPF tail calls to sequence two
>> programs together is also included. Finally, because 'perf' showed that the
>> hashmap lookup was the largest single source of overhead, I also added a test
>> case where I removed the jhash() call from the hashmap code, and just use the
>> u32 key directly as an index into the hash bucket structure.
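
I.e., for that test case I replaced the hash computation in the hashtab
code with a direct use of the key, something like the following (purely a
benchmarking hack, not a real patch):

    /* was: hash = jhash(key, key_size, hashrnd); */
    hash = *(const u32 *)key;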
>> 
>> The performance for these different cases is as follows (with retpolines disabled):
>
> retpolines enabled would also be interesting.

Yes, I'll try that as well.

>> 
>> | Test case                       | Perf      | Add. overhead | Total overhead |
>> |---------------------------------+-----------+---------------+----------------|
>> | Before patch (XDP DROP program) | 31.0 Mpps |               |                |
>> | After patch (XDP DROP program)  | 28.9 Mpps |        2.3 ns |         2.3 ns |
>
> IMO even 1 Mpps overhead is too much for a feature that is primarily
> about ease of use. Sacrificing performance to make userland a bit
> easier is hard to justify for me when XDP _is_ singularly about
> performance. Also that is nearly 10% overhead which is fairly large.
> So I think going forward the performance gap needs to be removed.

It's not just about ease of use. If we really want XDP to catch on and
be integrated into third-party applications, we *need* to solve the
multi-program problem. Otherwise users are going to have to choose
between running their DDoS protection application and their IDS, which
means no one is going to be able to rely on XDP support, and so no one
is going to bother to implement it. Or at least, I'm afraid it'll end up
that way.

That being said, I agree that the performance impact should be kept at
an absolute minimum. In terms of code it already is: it's a single if
statement to check if a chain map is loaded. I haven't looked into why
that takes 2.3 ns to do yet. I suspect it may just be because we're
taking a cache miss: the chain map pointer is not stored anywhere near
the rest of the XDP data structures...
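
For reference, the check on the fast path amounts to something like this
(a sketch; the field and function names are illustrative):

    struct bpf_map *chain_map =
            rcu_dereference(netdev->xdp_chain_map);

    if (!chain_map)  /* no chain map: behaviour is unchanged */
            return BPF_PROG_RUN(prog, xdp);

    return xdp_run_chain(chain_map, prog, xdp);

That rcu_dereference() of a pointer which doesn't share a cache line with
the data XDP otherwise touches is my current suspect for the 2.3 ns.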

>> | XDP tail call                   | 26.6 Mpps |        3.0 ns |         5.3 ns |
>> | XDP chain call (no jhash)       | 19.6 Mpps |       13.4 ns |        18.7 ns |
>> | XDP chain call (this series)    | 17.0 Mpps |        7.9 ns |        26.6 ns |
>> 
>> From this it is clear that there is some overhead from this mechanism, but
>> the jhash removal example indicates that it is probably possible to optimise the
>> code to the point where the overhead becomes low enough that it is acceptable.
>
> I'm missing why, 'in theory' at least, this can't be made as fast as
> tail calls? Again I can't see why someone would lose 30% of their
> performance when a userland program could populate a tail call map for
> the same effect. Sure userland would also have to enforce some program
> standards/conventions but it could be done and at 30% overhead that
> pain is probably worth it IMO.
>
> My thinking, though, is that if we are a bit clever, chaining and tail
> calls could be performance-wise equivalent?

Oh, I totally agree that 30% overhead is way too much. I just
prioritised getting this to work and sending it out for comments to get the
conversation going around the API before I spend too much time
optimising the performance bits.

I think it should be possible to get it pretty close to pure tail calls.
A tail call compiles down to a direct jump instruction without leaving
the BPF execution environment, though, so maybe not *exactly* as fast.
But I'd settle for an overhead of, say, a handful of nanoseconds
compared to a pure tail call. Certainly not the tens of nanoseconds I
get right now.
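
For comparison, the tail call variant sequences the programs from inside
BPF itself, so execution never leaves the BPF environment; something like:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    struct bpf_map_def SEC("maps") jmp_table = {
            .type = BPF_MAP_TYPE_PROG_ARRAY,
            .key_size = sizeof(__u32),
            .value_size = sizeof(__u32),
            .max_entries = 1,
    };

    SEC("xdp")
    int xdp_first(struct xdp_md *ctx)
    {
            bpf_tail_call(ctx, &jmp_table, 0); /* jump to next program */
            return XDP_PASS; /* only reached if slot 0 is empty */
    }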

Also, bear in mind that an overhead of, say, 4-6 nanoseconds only
translates into such a large percentage-wise overhead in this test
because the XDP programs being run don't actually do anything - not even
touch the packet data. As the programs themselves get larger, the
percentage-wise overhead becomes smaller as well. So a test like this
shows the worst-case performance.
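
For concreteness: at 31.0 Mpps the entire per-packet budget is
1/31.0e6 s, i.e. roughly 32 ns, so 2.3 ns is ~7% of that; whereas for a
program that does, say, 100 ns of real work per packet, the same 2.3 ns
would only be ~2%.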

-Toke
