From: Jiong Wang <jiong.wang@netronome.com>
To: Edward Cree <ecree@solarflare.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andriin@fb.com>
Cc: Jiong Wang <jiong.wang@netronome.com>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>,
	Daniel Borkmann <daniel@iogearbox.net>, bpf <bpf@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Jakub Kicinski <jakub.kicinski@netronome.com>
Subject: Re: [PATCH] bpf: optimize constant blinding
Date: Mon, 17 Jun 2019 21:40:26 +0100	[thread overview]
Message-ID: <877e9kgd39.fsf@netronome.com> (raw)
In-Reply-To: <58d86352-4989-38d6-666b-5e932df9ed46@solarflare.com>


Edward Cree writes:

> On 17/06/2019 20:59, Jiong Wang wrote:
>> Edward Cree writes:
>>
>>> On 14/06/2019 16:13, Jiong Wang wrote:
>>>> Just an update and keep people posted.
>>>>
>>>> Working on linked list based approach, the implementation looks like the
>>>> following, mostly a combine of discussions happened and Naveen's patch,
>>>> please feel free to comment.
>>>>
>>>>   - Use the reserved opcode 0xf0 with BPF_ALU as new pseudo insn code
>>>>     BPF_LIST_INSN. (0xf0 is also used with BPF_JMP class for tail call).
>>>>
>>>>   - Introduce patch pool into bpf_prog->aux to keep all patched insns.
>>> It's not clear to me what the point of the patch pool is, rather than just
>>>  doing the patch straight away. 
>> I used pool because I was thinking insn to be patched could be high
>> percentage, so doing lots of alloc call is going to be less efficient? so
>> allocate a big pool, and each time when creating new patch node, allocate
>> it from the pool directly. Node is addressed using pool_base + offset, each
>> node only need to keep offset.
> Good idea; but in that case it doesn't need to be a pool of patches (storing
>  their prev and next), just a pool of insns.  I.e. struct bpf_insn pool[many];
>  then in orig prog when patching an insn replace it with BPF_LIST_INSN.  If
>  we later decide to patch an insn within a patch, we can replace it (i.e. the
>  entry in bpf_insn_pool) with another BPF_LIST_INSN pointing to some later bit
>  of the pool, then we just have a little bit of recursion at linearise time.
> Does that work?

I feel it is not going to work well. What I proposed initially was
something similar, except that when we patch an insn within a patch (this
does happen, for example zext insertion applies to some patched alu insns
inserted during the ctx convert pass), we split the patch. That leaves the
data structures in two shapes:

1. the original prog->insnsi is still maintained as an array.
2. if one insn is a BPF_LIST_INSN, then it heads a list, which can be
   traversed starting from pool_base + insn.imm.

But then the data structures are inconsistent, so insn traversal needs
something like:
   for (idx = 0; idx < insn_cnt; idx++) {
     if (insns[idx] is not BPF_LIST_INSN) {
       do_insn(...)
     } else {
       list = pool_base + insns[idx].imm;
       while (list) {
         insn = list->insn;
         do_insn(...)
         list = list->next ? pool_base + list->next : NULL;
       }
     }
   }

The code logic inside convert_ctx_accesses/fixup_bpf_call etc. needs to be
refactored out into separate do_NNN functions so it can be called from the
above loop.

Now if we don't split the patch when patching an insn inside a patch, and
instead replace the patched insn as you suggested, then the logic looks to
me to become even more complex, something like:

   for (idx = 0; idx < insn_cnt; idx++) {
     if (insns[idx] is not BPF_LIST_INSN) {
       do_insn(...)
     } else {
       list = pool_base + insns[idx].imm;
       while (list) {
         insn = list->insn;
         if (insn is BPF_LIST_INSN) {
           sub_list = ...
           while ()
             do_insn()
           list = list->next ? pool_base + list->next : NULL;
           continue;
         }
         do_insn(...)
         list = list->next ? pool_base + list->next : NULL;
       }
     }
   }

So I am thinking what Alexei and Andrii suggested makes sense: just use a
single data structure (a singly linked list) to represent everything, so
that insn traversal etc. stays simple. But I am considering whether it is
better to still base the list on top of the pool infrastructure mentioned
above.

I have something like the following:

+struct bpf_list_insn {
+       struct bpf_insn insn;
+       u32 next;
+};

struct bpf_prog_aux {

+       struct {
+               struct bpf_list_insn *pool;
+               u32 size;
+               u32 used;
+       };

Whenever you want to do intensive patching work you would call
bpf_patch_init to set up the pool. The setup includes:
  1. converting the original prog->insnsi into a list of bpf_list_insn.
     next is an index into the pool, so for the initial insnsi the next
     values are simply 1, 2, 3, 4, 5, ...
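
Roughly, the init could look like the following. This is just a sketch to
show the idea; error paths are trimmed, and the last node's next of 0
terminates the list (slot 0 only ever holds the list head, so 0 is free to
act as a sentinel):

   static int bpf_patch_init(struct bpf_prog *prog)
   {
           struct bpf_prog_aux *aux = prog->aux;
           u32 i, cnt = prog->len;

           aux->pool = kcalloc(cnt, sizeof(*aux->pool), GFP_KERNEL);
           if (!aux->pool)
                   return -ENOMEM;
           aux->size = cnt;
           aux->used = cnt;

           for (i = 0; i < cnt; i++) {
                   aux->pool[i].insn = prog->insnsi[i];
                   /* last node's next of 0 ends the list */
                   aux->pool[i].next = i + 1 < cnt ? i + 1 : 0;
           }
           return 0;
   }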

Then when patching, just allocate new slots from the pool and link them
into the list, as sketched below.
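
Something like this, again just a sketch (bpf_patch_alloc and
bpf_patch_insn are names I am making up here). Note that since slot 0 is
always taken by the first original insn, 0 can double as the "alloc
failed" marker, and because nodes are re-read through aux->pool on each
access, a krealloc move of the pool is harmless:

   static u32 bpf_patch_alloc(struct bpf_prog_aux *aux)
   {
           if (aux->used == aux->size) {
                   u32 new_size = aux->size * 2;
                   struct bpf_list_insn *p;

                   p = krealloc(aux->pool, new_size * sizeof(*p),
                                GFP_KERNEL);
                   if (!p)
                           return 0;
                   aux->pool = p;
                   aux->size = new_size;
           }
           return aux->used++;
   }

   /* Replace the single insn at @off with the @len insns in @patch. */
   static int bpf_patch_insn(struct bpf_prog_aux *aux, u32 off,
                             const struct bpf_insn *patch, u32 len)
   {
           u32 i, prev = off, saved_next = aux->pool[off].next;

           aux->pool[off].insn = patch[0];
           for (i = 1; i < len; i++) {
                   u32 n = bpf_patch_alloc(aux);

                   if (!n)
                           return -ENOMEM;
                   aux->pool[n].insn = patch[i];
                   aux->pool[prev].next = n;
                   prev = n;
           }
           aux->pool[prev].next = saved_next;
           return 0;
   }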

When patching is finished, call bpf_patch_fini to linearize; this could
use the same algorithm as in Naveen's patch.
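
The copy-out itself is then a single walk of the list, something like the
sketch below (the branch target fixup that Naveen's patch handles is
omitted, and @dst is assumed to have been sized by a first counting walk):

   static u32 bpf_patch_fini(struct bpf_prog_aux *aux,
                             struct bpf_insn *dst)
   {
           u32 cnt = 0, off = 0;

           for (;;) {
                   dst[cnt++] = aux->pool[off].insn;
                   if (!aux->pool[off].next)
                           break;
                   off = aux->pool[off].next;
           }
           kfree(aux->pool);
           aux->pool = NULL;
           aux->size = aux->used = 0;
           return cnt;
   }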

After digesting Alexei and Andrii's replies, I still don't see the need to
turn branch targets into lists, and I am not sure whether the pool-based
list sounds good? It saves size, and resizing the pool doesn't invalidate
allocated nodes (their offsets don't change), but it requires one extra
addition to calculate the pointer.
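
To put that last cost in code, following a link with the pool is

   node = &pool[node->next];    /* base + offset: the extra addition */

versus the plain pointer version

   node = node->next;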

>
> -Ed



Thread overview: 17+ messages
2019-06-12 11:32 [PATCH] bpf: optimize constant blinding Naveen N. Rao
2019-06-12 14:47 ` Alexei Starovoitov
2019-06-12 15:04   ` Jiong Wang
2019-06-12 15:25     ` Jiong Wang
2019-06-12 15:28       ` Alexei Starovoitov
2019-06-14 15:13         ` Jiong Wang
2019-06-14 17:05           ` Alexei Starovoitov
2019-06-14 22:28             ` Andrii Nakryiko
2019-06-17 19:47           ` Edward Cree
2019-06-17 19:59             ` Jiong Wang
2019-06-17 20:11               ` Edward Cree
2019-06-17 20:40                 ` Jiong Wang [this message]
2019-06-17 20:53                   ` Alexei Starovoitov
2019-06-17 21:01                     ` Jiong Wang
2019-06-17 21:16                   ` Edward Cree
2019-06-19 20:45                     ` Jiong Wang
2019-06-14  4:30 ` kbuild test robot
