From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Jiong Wang <jiong.wang@netronome.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Edward Cree <ecree@solarflare.com>,
	"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>,
	Andrii Nakryiko <andriin@fb.com>,
	Jakub Kicinski <jakub.kicinski@netronome.com>,
	bpf <bpf@vger.kernel.org>, Networking <netdev@vger.kernel.org>,
	oss-drivers@netronome.com
Subject: Re: [RFC bpf-next 0/8] bpf: accelerate insn patching speed
Date: Wed, 10 Jul 2019 10:39:44 -0700
Message-ID: <CAEf4BzavePpW-C+zORN1kwSUJAWuJ3LxZ6QGxqaE9msxCq8ZLA@mail.gmail.com>
In-Reply-To: <1562275611-31790-1-git-send-email-jiong.wang@netronome.com>

On Thu, Jul 4, 2019 at 2:31 PM Jiong Wang <jiong.wang@netronome.com> wrote:
>
> This is an RFC based on latest bpf-next about accelerating insn patching
> speed. It is now near the shape of the final PATCH set, and we could see
> the changes that migrating to list patching would bring, so I am sending
> it out for comments. Most of the info is in the cover letter. I split the
> code in a way that shows the API migration more easily.


Hey Jiong,


Sorry, it took me a while to get to this and learn more about instruction
patching. Overall this looks good and I think it's a good direction.
I'll post high-level feedback here, and some more
implementation-specific comments in the corresponding patches.


>
> Test Results
> ===
>   - Full pass on test_verifier/test_prog/test_prog_32 under all three
>     modes (interpreter, JIT, JIT with blinding).
>
>   - Benchmarking shows 10 ~ 15x faster patching on medium sized progs, and
>     a reduction in patching time from 5100s (nearly one and a half hours)
>     to less than 0.5s for 1M insn patching.
>
> Known Issues
> ===
>   - The following warning is triggered when running the scale test which
>     contains 1M insns and patching:
>       WARNING at mm/page_alloc.c:4639 __alloc_pages_nodemask+0x29e/0x330
>
>     This is caused by existing code; it can be reproduced on bpf-next
>     master with jit blinding enabled by running the scale unit test, where
>     it shows up after half an hour. After this set, patching is very fast,
>     so it shows up quickly.
>
>   - No line info adjustment support when doing insn delete; subprog
>     adjustment is also buggy when doing insn delete. Generally, removal of
>     insns could cause removal of an entire line or subprog, therefore
>     entries of prog->aux->linfo or env->subprog need to be deleted. I
>     don't have a good idea and clean code for integrating this into the
>     linearization code at the moment; I will do more experimenting, and
>     appreciate ideas and suggestions on this.

Is there any specific problem in detecting which line info to delete? Or
what am I missing besides the need for a careful implementation?

>
>     Insn delete doesn't happen on normal programs, for example the Cilium
>     benchmarks, and happens rarely in test_progs, so the test coverage is
>     not good. That's also why this RFC has a full pass on selftests despite
>     this known issue.

I hope you'll add a test for deletion (with corresponding line info)
in the final patch set :)

>
>   - Could further use a mem pool to accelerate the speed; the changes are
>     trivial on top of this RFC, and could be an extra 2x faster. Not
>     included in this RFC, as reducing the algo complexity from quadratic
>     to linear in insn number is the first step.

Honestly, I think that would add more complexity than necessary, and I
think we can further improve performance without it, see below.

>
> Background
> ===
> This RFC aims to accelerate BPF insn patching speed; patching means
> expanding one bpf insn at any offset inside a bpf prog into a set of new
> insns, or removing insns.
>
> At the moment, insn patching is quadratic in insn number. This is because
> branch targets of jump insns need to be adjusted, and the algo used is:
>
>   for insn inside prog
>     patch insn + regenerate bpf prog
>     for insn inside new prog
>       adjust jump target
>
> This causes significant time to be spent when a bpf prog requires a large
> amount of patching on different insns. Benchmarking shows it could take
> more than half a minute to finish patching when the patching number is
> more than 50K, and the time spent could be more than one hour when the
> patching number is around 1M.
>
>   patching number : time
>   15000           :    3s
>   45000           :   29s
>   95000           :  125s
>   195000          :  712s
>   1000000         : 5100s
>
> This RFC introduces a new patching infrastructure. Before doing insn
> patching, insns in the bpf prog are turned into a singly linked list;
> inserting new insns just inserts new list nodes, and deleting insns just
> sets a delete flag. Finally, the list is linearized back into an array,
> and branch target adjustment is done for all jump insns during
> linearization. This algo brings the time complexity from quadratic to
> linear in insn number.
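
If I'm reading this right, the win is that each patch becomes a local
list splice instead of an array copy plus a prog-wide offset fixup.
This is roughly how I picture it (just a sketch to check my
understanding, with made-up names, not the RFC's exact code):

static struct bpf_list_insn *
list_insn_splice(struct bpf_list_insn *elem,
		 const struct bpf_insn *patch, u32 cnt)
{
	struct bpf_list_insn *prev = elem;
	u32 i;

	/* overwrite the patch point itself, then splice cnt - 1 new
	 * nodes after it; O(cnt) per patch, no jump offset fixups
	 * until linearization
	 */
	elem->insn = patch[0];
	for (i = 1; i < cnt; i++) {
		struct bpf_list_insn *node;

		node = kzalloc(sizeof(*node), GFP_KERNEL);
		if (!node)
			return NULL;	/* caller destroys the whole list */
		node->insn = patch[i];
		node->next = prev->next;
		prev->next = node;
		prev = node;
	}
	return prev;	/* continue iterating after the patch buffer */
}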
>
> Benchmarking shows the new patching infrastructure is 10 ~ 15x faster
> on medium sized progs, and for 1M patching it reduces the time from 5100s
> to less than 0.5s.
>
> Patching API
> ===
> Insn patching could happen on two layers inside BPF. One is the "core
> layer" where only BPF insns are patched. The other is the "verification
> layer" where insns have corresponding aux info as well as high-level
> subprog info, so insn patching means the aux info needs to be patched as
> well, and the subprog info needs to be adjusted. A BPF prog also has
> debug info associated, so line info should always be updated after insn
> patching.
>
> So, list creation, destruction, insertion and deletion are the same for
> both layers, but linearization is different: "verification layer"
> patching requires extra work. Therefore the patch APIs are:
>
>    list creation:                bpf_create_list_insn
>    list patch:                   bpf_patch_list_insn
>    list pre-patch:               bpf_prepatch_list_insn

I think the pre-patch name is very confusing; until I read the full
description I couldn't understand what it's supposed to be used for.
Speaking of bpf_patch_list_insn, "patch" is also generic enough to leave
me wondering whether the instruction buffer is inserted after the
instruction, or the instruction is replaced with a bunch of instructions.

So how about two more specific names:
bpf_patch_list_insn -> bpf_list_insn_replace (meaning replace the given
instruction with a list of patch instructions);
bpf_prepatch_list_insn -> bpf_list_insn_prepend (well, I think this
one is pretty clear).
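
In signature form, something along these lines (a sketch; the return
convention is my assumption, not necessarily what the RFC does):

/* replace the insn in 'elem' with patch[0..cnt-1] */
struct bpf_list_insn *
bpf_list_insn_replace(struct bpf_list_insn *elem,
		      const struct bpf_insn *patch, u32 cnt);

/* insert patch[0..cnt-1] before 'elem', leaving 'elem' and its
 * aux info untouched
 */
struct bpf_list_insn *
bpf_list_insn_prepend(struct bpf_list_insn *elem,
		      const struct bpf_insn *patch, u32 cnt);

Both returning the last spliced-in node would let callers continue
iterating right after the patch buffer.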

>    list linearization (core layer): prog = bpf_linearize_list_insn(prog, list)
>    list linearization (veri layer): env = verifier_linearize_list_insn(env, list)

These two functions are both quite involved, and they share a lot of
common code. I'd rather have one linearize function that takes env
as an optional parameter. If env is specified (which is the case
everywhere except the constant blinding pass), then adjust aux_data and
subprogs along the way.

This would keep the logic less duplicated, and shouldn't add complexity
beyond a few NULL checks in a few places.
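
Roughly (a sketch of what I mean, not a worked-out signature):

/* env == NULL for core-layer users (constant blinding); when env is
 * non-NULL, aux_data and subprog boundaries are adjusted as well
 */
struct bpf_prog *
bpf_linearize_list_insn(struct bpf_verifier_env *env,
			struct bpf_prog *prog,
			struct bpf_list_insn *list);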

>    list destroy:                 bpf_destroy_list_insn
>

I'd also add a foreach_list_insn macro instead of explicit for loops
in multiple places. That would also allow skipping deleted instructions
transparently.
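
E.g., assuming deleted nodes carry a flag (names made up), the usual
empty-if trick would hide them from every caller:

#define foreach_list_insn(elem, list)				\
	for ((elem) = (list); (elem); (elem) = (elem)->next)	\
		if ((elem)->flags & LIST_INSN_FLAG_REMOVED) {} else

so loops become:

	foreach_list_insn(elem, list) {
		/* only live instructions reach here */
	}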

> list patch could change the insn at patch point, it will invalid the aux

typo: invalid -> invalidate

> info at the patching point. list pre-patch inserts new insns before the
> patch point; the insn and associated aux info are not touched. It is
> used, for example, in convert_ctx_accesses when generating the prologue.
>
> Typical API sequence for one patching pass:
>
>    struct bpf_list_insn *list = bpf_create_list_insn(prog);
>    for (elem = list; elem; elem = elem->next) {
>       patch_buf = gen_patch_buf_logic;
>       elem = bpf_patch_list_insn(elem, patch_buf, cnt);
>    }
>    prog = bpf_linearize_list_insn(list);
>    bpf_destroy_list_insn(list);
>
> Several patching passes could also share the same list:
>
>    struct bpf_list_insn *list = bpf_create_list_insn(prog);
>    for (elem = list; elem; elem = elem->next) {
>       patch_buf = gen_patch_buf_logic1;
>       elem = bpf_patch_list_insn(elem, patch_buf, cnt);
>    }
>    for (elem = list; elem; elem = elem->next) {
>       patch_buf = gen_patch_buf_logic2;
>       elem = bpf_patch_list_insn(elem, patch_buf, cnt);
>    }
>    prog = bpf_linearize_list_insn(list);
>    bpf_destroy_list_insn(list);
>
> but note that insns inserted in early passes won't have aux info except
> zext info. So, if one patch pass requires all aux info to be updated and
> recalculated for all insns including those patched, it should first
> linearize the old list, then re-create the list. The RFC always creates
> and linearizes the list for each migrated patching pass separately.

I think we should do just one list creation, a few passes of patching,
and then linearize once. That will save quite a lot of memory
allocation and will speed up a lot of things. All the verifier
patching passes happen one after the other without any other
functionality in between, so there shouldn't be any problem.

As for aux_data, we can solve that even more simply and reliably by
storing a pointer along with the struct bpf_list_insn (btw, how about
calling it bpf_patchable_insn?).

Here's how I propose to represent this patchable instruction:

struct bpf_list_insn {
       struct bpf_insn insn;
       struct bpf_list_insn *next;
       struct bpf_list_insn *target;
       struct bpf_insn_aux_data *aux_data;
       s32 orig_idx; // can repurpose this to have three meanings:
                     // -2 - deleted
                     // -1 - patched/inserted insn
                     // >=0 - original idx
};

The idea would be as follows:
1. when creating the original list, the target pointer will point directly
to a patchable instruction wrapper for jumps/calls. This will allow us to
stop tracking and re-calculating jump offsets and instruction indices
until linearization.
2. aux_data is also filled at that point. Later, at linearization time,
you'd just iterate over all the instructions in final order and copy
the original aux_data, if it's present. And then just replace env's
aux_data array at the end; should be very simple and fast.
3. during the fix_bpf_calls, zext, and ctx rewrite passes, we'll reuse the
same list of instructions and those passes will just keep inserting
instruction buffers. Given we have the restriction that all the jumps are
only within the patch buffer, it will be trivial to construct proper
patchable instruction wrappers for newly added instructions, with NULL
aux_data and possibly non-NULL target (if it's a JMP insn).
4. after those passes, linearize, adjust subprogs (for this you'll
probably still need to create an index mapping, right?), and copy or
create new aux_data (sketched right below).
5. Done.
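
For the aux_data part of step 4, something like this (a sketch based on
the struct layout above; the helper name and the rest are my assumption):

/* walk the final instruction order once, copying aux_data for
 * surviving original insns and leaving it zeroed for patched ones
 */
static int linearize_aux_data(struct bpf_verifier_env *env,
			      struct bpf_list_insn *list, u32 insn_cnt)
{
	struct bpf_insn_aux_data *new_aux;
	struct bpf_list_insn *elem;
	u32 idx = 0;

	new_aux = vzalloc(array_size(insn_cnt, sizeof(*new_aux)));
	if (!new_aux)
		return -ENOMEM;
	for (elem = list; elem; elem = elem->next) {
		if (elem->orig_idx == -2)	/* deleted insn, skip */
			continue;
		if (elem->aux_data)		/* surviving original insn */
			new_aux[idx] = *elem->aux_data;
		idx++;
	}
	vfree(env->insn_aux_data);
	env->insn_aux_data = new_aux;
	return 0;
}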

What do you think? I think this should be overall simpler and faster.
But let me know if I'm missing something.

>
> Compared with the old patching code, this new infrastructure has much
> less core code. Even though the final code has a couple of extra lines,
> that is mostly because, for a list based infrastructure, we need to do
> more error checks, so the list and associated aux data structures can be
> freed when errors happen.
>
> Patching Restrictions
> ===
>   - For the core layer, the linearization assumes no new jumps inside the
>     patch buf. Currently, the only user of this layer is jit blinding.
>   - For the verifier layer, there could be new jumps inside the patch
>     buf, but they should have their branch targets resolved themselves,
>     meaning new jumps don't jump to insns outside the patch buf. This is
>     the case for all existing verifier layer users.
>   - bpf_insn_aux_data for all patched insns including the one at the
>     patch point is invalidated; only 32-bit zext info will be
>     recalculated. If the aux data of the insn at the patch point needs to
>     be retained, it is purely insn insertion, so the pre-patch API needs
>     to be used.
>
> I plan to send out a PATCH set once I have finished insn deletion line
> info adjustment support. Please have a look at this RFC; feedback is
> appreciated.
>
> Jiong Wang (8):
>   bpf: introducing list based insn patching infra to core layer
>   bpf: extend list based insn patching infra to verification layer
>   bpf: migrate jit blinding to list patching infra
>   bpf: migrate convert_ctx_accesses to list patching infra
>   bpf: migrate fixup_bpf_calls to list patching infra
>   bpf: migrate zero extension opt to list patching infra
>   bpf: migrate insn remove to list patching infra
>   bpf: delete all those code around old insn patching infrastructure
>
>  include/linux/bpf_verifier.h |   1 -
>  include/linux/filter.h       |  27 +-
>  kernel/bpf/core.c            | 431 +++++++++++++++++-----------
>  kernel/bpf/verifier.c        | 649 +++++++++++++++++++------------------------
>  4 files changed, 580 insertions(+), 528 deletions(-)
>
> --
> 2.7.4
>

