Netdev Archive on lore.kernel.org
From: Jiong Wang <jiong.wang@netronome.com>
To: Edward Cree <ecree@solarflare.com>
Cc: Jiong Wang <jiong.wang@netronome.com>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>,
	Daniel Borkmann <daniel@iogearbox.net>, bpf <bpf@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Jakub Kicinski <jakub.kicinski@netronome.com>
Subject: Re: [PATCH] bpf: optimize constant blinding
Date: Mon, 17 Jun 2019 20:59:54 +0100
Message-ID: <878su0geyt.fsf@netronome.com> (raw)
In-Reply-To: <41dfe080-be03-3344-d279-e638a5a6168d@solarflare.com>


Edward Cree writes:

> On 14/06/2019 16:13, Jiong Wang wrote:
>> Just an update and keep people posted.
>>
>> Working on linked list based approach, the implementation looks like the
>> following, mostly a combine of discussions happened and Naveen's patch,
>> please feel free to comment.
>>
>>   - Use the reserved opcode 0xf0 with BPF_ALU as new pseudo insn code
>>     BPF_LIST_INSN. (0xf0 is also used with BPF_JMP class for tail call).
>>
>>   - Introduce patch pool into bpf_prog->aux to keep all patched insns.
> It's not clear to me what the point of the patch pool is, rather than just
>  doing the patch straight away. 

I used a pool because I was thinking the percentage of insns to be patched
could be high, so making lots of separate alloc calls would be less
efficient. Instead, allocate one big pool up front, and each time a new
patch node is created, allocate it from the pool directly. A node is
addressed as pool_base + offset, so each node only needs to keep its
offset rather than a full pointer.
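To make the idea concrete, here is a minimal sketch of that pool scheme.
All names (patch_node, patch_pool, pool_alloc) are hypothetical, not from
any posted patch; it only illustrates the one-big-allocation / offset
addressing point:

```c
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical patch node: nodes are addressed as pool_base + offset,
 * so a node can refer to its successor with a 32-bit offset instead
 * of a pointer. */
struct patch_node {
	uint32_t insn_off;   /* offset of the patched insn in the pool */
	uint32_t next_off;   /* offset of the next node, or a sentinel */
};

struct patch_pool {
	struct patch_node *base;
	size_t used, cap;
};

/* One big allocation up front; later nodes come straight from the
 * pool, avoiding a malloc call per patched insn. */
static struct patch_node *pool_alloc(struct patch_pool *p)
{
	if (p->used == p->cap)
		return NULL;  /* caller would grow the pool or fail */
	return &p->base[p->used++];
}
```

The trade-off versus per-node allocation is that the pool must be sized
(or grown) for the worst case, but node creation becomes a bump of
`used` and node references shrink to offsets.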

> Idea is that when prog is half-patched,
>  insn idxs / jump offsets / etc. all refer to the original locations, only
>  some of those might now be a list rather than a single insn.  Concretely:
>
> struct bpf_insn_list { bpf_insn insn; struct list_head list; }
> orig prog: array bpf_insn[3] = {cond jump +1, insn_foo, exit};
> start of day verifier converts this into array bpf_insn_list[3],
>  each with new.insn = orig.insn; INIT_LIST_HEAD(new.list)
>
> verifier decides to patch insn_foo into {insn_bar, insn_baz}.
> so we allocate bpf_insn_list *x;
> insn_foo.insn = insn_bar;
> x->insn = insn_baz;
> list_add_tail(&x->list, &insn_foo.list);
>
> the cond jump +1 is _not_ changed at this point, because it counts
>  bpf_insn_lists and not insns.
>
> Then at end of day to linearise we just have int new_idx[3];
>  populate it by iterating over the array of bpf_insn_list and adding on
>  the list length each time (so we get {0, 1, 3})
>  then allocate output prog of the total length (here 4, calculated by
>  that same pass as effectively off-the-end entry of new_idx)
>  then iterate again to write out the output prog, when we see that 'cond
>  jump +1' in old_idx 0 we see that (new_idx[2] - new_idx[0] - 1) = 2, so
>  it becomes 'cond jump +2'.
>
>
> This seems to me like it'd be easier to work with than making the whole
>  program one big linked list (where you have to separately keep track of
>  what each insn's idx was before patching).  But I haven't tried to code
>  it yet, so anything could happen ;-)  It does rely on the assumption
>  that only original insns (or the first insn of a list they get patched
>  with) will ever be jump targets or otherwise need their insn_idx taken.
>
> -Ed
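For reference, the scheme above can be sketched end to end with toy types
(pointer-based list rather than the kernel's list_head, and made-up opcode
names; this is an illustration of the idea, not kernel code). Patching
touches only the target slot; jump offsets keep counting slots until the
final linearise pass rewrites them via new_idx, exactly as in the
cond-jump example:

```c
#include <stdlib.h>

/* Toy insn: an opcode plus a jump offset counted relative to the
 * next insn, as in BPF (target = idx + 1 + off). */
struct insn { int op; int off; };
enum { OP_JCOND = 1, OP_FOO, OP_BAR, OP_BAZ, OP_EXIT };

/* One original slot: the (possibly replaced) first insn plus a
 * linked tail of patched-in insns.  len counts the tail only. */
struct insn_node { struct insn insn; struct insn_node *next; };
struct insn_list { struct insn insn; struct insn_node *head; int len; };

/* Patch slot: replace its insn with repl[0] and append the rest.
 * Jump offsets elsewhere are untouched -- they count slots, not insns. */
static void patch(struct insn_list *slot, const struct insn *repl, int n)
{
	struct insn_node **pp = &slot->head;

	slot->insn = repl[0];
	while (*pp)
		pp = &(*pp)->next;
	for (int i = 1; i < n; i++) {
		struct insn_node *x = malloc(sizeof(*x));

		x->insn = repl[i];
		x->next = NULL;
		*pp = x;
		pp = &x->next;
		slot->len++;
	}
}

/* Linearise: new_idx[i] is slot i's insn index in the output (with an
 * off-the-end entry giving the total length); jumps are rewritten from
 * slot counts to insn counts.  Returns the output length. */
static int linearise(struct insn_list *prog, int n, struct insn *out)
{
	int *new_idx = malloc((n + 1) * sizeof(int));
	int pos = 0;

	for (int i = 0; i < n; i++) {
		new_idx[i] = pos;
		pos += 1 + prog[i].len;
	}
	new_idx[n] = pos;

	pos = 0;
	for (int i = 0; i < n; i++) {
		struct insn ins = prog[i].insn;

		if (ins.op == OP_JCOND)
			ins.off = new_idx[i + 1 + ins.off] - new_idx[i] - 1;
		out[pos++] = ins;
		for (struct insn_node *x = prog[i].head; x; x = x->next)
			out[pos++] = x->insn;
	}
	free(new_idx);
	return pos;
}
```

Running the example from above -- {cond jump +1, insn_foo, exit} with
insn_foo patched into {insn_bar, insn_baz} -- yields new_idx = {0, 1, 3},
an output length of 4, and the jump rewritten to +2. Note this sketch
relies on the same assumption Ed states: only a slot's first insn can be
a jump target.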



Thread overview: 17+ messages
2019-06-12 11:32 Naveen N. Rao
2019-06-12 14:47 ` Alexei Starovoitov
2019-06-12 15:04   ` Jiong Wang
2019-06-12 15:25     ` Jiong Wang
2019-06-12 15:28       ` Alexei Starovoitov
2019-06-14 15:13         ` Jiong Wang
2019-06-14 17:05           ` Alexei Starovoitov
2019-06-14 22:28             ` Andrii Nakryiko
2019-06-17 19:47           ` Edward Cree
2019-06-17 19:59             ` Jiong Wang [this message]
2019-06-17 20:11               ` Edward Cree
2019-06-17 20:40                 ` Jiong Wang
2019-06-17 20:53                   ` Alexei Starovoitov
2019-06-17 21:01                     ` Jiong Wang
2019-06-17 21:16                   ` Edward Cree
2019-06-19 20:45                     ` Jiong Wang
2019-06-14  4:30 ` kbuild test robot
