From: Song Liu <songliubraving@fb.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Song Liu <song@kernel.org>, bpf <bpf@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	Kernel Team <Kernel-team@fb.com>,
	"Peter Zijlstra" <peterz@infradead.org>, X86 ML <x86@kernel.org>
Subject: Re: [PATCH v6 bpf-next 6/7] bpf: introduce bpf_prog_pack allocator
Date: Sat, 22 Jan 2022 01:01:41 +0000
Message-ID: <5407DA0E-C0F8-4DA9-B407-3DE657301BB2@fb.com>
In-Reply-To: <CAADnVQJLHXaU7tUJN=EM-Nt28xtu4vw9+Ox_uQsjh-E-4VNKoA@mail.gmail.com>



> On Jan 21, 2022, at 4:41 PM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> 
> On Fri, Jan 21, 2022 at 4:23 PM Song Liu <songliubraving@fb.com> wrote:
>> 
>> 
>> 
>>> On Jan 21, 2022, at 3:55 PM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>>> 
>>> On Fri, Jan 21, 2022 at 11:49 AM Song Liu <song@kernel.org> wrote:
>>>> 
>>>> +static struct bpf_binary_header *
>>>> +__bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
>>>> +                      unsigned int alignment,
>>>> +                      bpf_jit_fill_hole_t bpf_fill_ill_insns,
>>>> +                      u32 round_up_to)
>>>> +{
>>>> +       struct bpf_binary_header *hdr;
>>>> +       u32 size, hole, start;
>>>> +
>>>> +       WARN_ON_ONCE(!is_power_of_2(alignment) ||
>>>> +                    alignment > BPF_IMAGE_ALIGNMENT);
>>>> +
>>>> +       /* Most of BPF filters are really small, but if some of them
>>>> +        * fill a page, allow at least 128 extra bytes to insert a
>>>> +        * random section of illegal instructions.
>>>> +        */
>>>> +       size = round_up(proglen + sizeof(*hdr) + 128, round_up_to);
>>>> +
>>>> +       if (bpf_jit_charge_modmem(size))
>>>> +               return NULL;
>>>> +       hdr = bpf_jit_alloc_exec(size);
>>>> +       if (!hdr) {
>>>> +               bpf_jit_uncharge_modmem(size);
>>>> +               return NULL;
>>>> +       }
>>>> +
>>>> +       /* Fill space with illegal/arch-dep instructions. */
>>>> +       bpf_fill_ill_insns(hdr, size);
>>>> +
>>>> +       hdr->size = size;
>>>> +       hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
>>>> +                    PAGE_SIZE - sizeof(*hdr));
>>> 
>>> It probably should be 'round_up_to' instead of PAGE_SIZE?
>> 
>> Actually, some of these changes are no longer needed after the
>> following change in v6:
>> 
>>  4. Change the fallback round_up_to in bpf_jit_binary_alloc_pack()
>>     from BPF_PROG_MAX_PACK_PROG_SIZE to PAGE_SIZE.
>> 
>> My initial thought (last year) was that if we allocate more than 2MB
>> (say, 2.1MB or 3.9MB), we should round up to 4MB to save page table
>> entries. However, when I revisited this earlier today, I thought we
>> should still round up to PAGE_SIZE to save memory.
>> 
>> Right now, I am not sure which way is better. What do you think? If we
>> round up to PAGE_SIZE, we don't need to split out __bpf_jit_binary_alloc().
> 
> The less code duplication the better.

Got it. Will go with PAGE_SIZE. 
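
To make the trade-off concrete (rough numbers only, and assuming
BPF_PROG_MAX_PACK_PROG_SIZE is the 2MB pack size implied above):

	/*
	 * proglen ~= 2.1MB, size = proglen + sizeof(*hdr) + 128:
	 *
	 *   round_up(size, PAGE_SIZE)                   -> ~2.1MB (minimal waste)
	 *   round_up(size, BPF_PROG_MAX_PACK_PROG_SIZE) -> 4MB    (huge-page friendly,
	 *                                                          but ~1.9MB left unused)
	 */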

[...]

>>>> +
>>>>       if (bpf_jit_charge_modmem(size))
>>>>               return NULL;
>>>> -       hdr = bpf_jit_alloc_exec(size);
>>>> +       hdr = bpf_prog_pack_alloc(size);
>>>>       if (!hdr) {
>>>>               bpf_jit_uncharge_modmem(size);
>>>>               return NULL;
>>>> @@ -888,9 +1052,8 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
>>>>       /* Fill space with illegal/arch-dep instructions. */
>>>>       bpf_fill_ill_insns(hdr, size);
>>>> 
>>>> -       hdr->size = size;
>>> 
>>> I'm missing where it's assigned.
>>> Looks like hdr->size stays zero, so free is never performed?
>> 
>> This is read-only memory, so we set it in bpf_fill_ill_insns(). There is a
>> comment about this in x86/bpf_jit_comp.c. I guess we also need a comment here.
> 
> Ahh. Found it. Pls don't do it in fill_insn.
> It's the wrong layering.
> It feels like the callbacks need to be redesigned.
> I would operate on rw_header here and use the
> existing arch-specific callback fill_insn to write into rw_image:
> both insns during JITing and 0xcc on both sides of the prog.
> Populate rw_header->size (either before or after JITing)
> and then do a single text_poke_copy to write the whole thing
> into the correct spot.

That way, we need to allocate rw_image here and free it in
bpf_jit_comp.c. This feels a little weird to me, but I guess it is
still the cleanest solution for now.
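
Roughly something like this, just as a sketch to make sure I understand
the suggestion (jit_alloc_and_publish is a made-up name, the RW buffer
is a plain kvmalloc here, and error handling / where exactly the RW
buffer gets freed is hand-waved):

	static struct bpf_binary_header *
	jit_alloc_and_publish(unsigned int proglen, u8 **image_ptr,
			      bpf_jit_fill_hole_t bpf_fill_ill_insns)
	{
		u32 size = round_up(proglen + sizeof(struct bpf_binary_header) + 128,
				    PAGE_SIZE);
		struct bpf_binary_header *ro_header, *rw_header;

		/* read-only executable memory from the 2MB pack */
		ro_header = bpf_prog_pack_alloc(size);
		if (!ro_header)
			return NULL;

		/* plain writable scratch buffer for the JIT to work on */
		rw_header = kvmalloc(size, GFP_KERNEL);
		if (!rw_header) {
			/* return ro_header to the pack here */
			return NULL;
		}

		/* 0xcc padding and size are written to the RW copy only */
		bpf_fill_ill_insns(rw_header, size);
		rw_header->size = size;

		/* ... arch JIT emits instructions into the RW image ... */

		/* one shot: copy header + image into the read-only spot */
		text_poke_copy(ro_header, rw_header, size);
		kvfree(rw_header);

		*image_ptr = (u8 *)ro_header + sizeof(*ro_header);
		return ro_header;
	}

The main point being that hdr->size and the 0xcc padding are written on
the RW side, and text_poke_copy() touches the read-only pack exactly
once.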

Thanks,
Song


Thread overview: 31+ messages
2022-01-21 19:49 [PATCH v6 bpf-next 0/7] bpf_prog_pack allocator Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 1/7] x86/Kconfig: select HAVE_ARCH_HUGE_VMALLOC with HAVE_ARCH_HUGE_VMAP Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 2/7] bpf: use bytes instead of pages for bpf_jit_[charge|uncharge]_modmem Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 3/7] bpf: use size instead of pages in bpf_binary_header Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 4/7] bpf: add a pointer of bpf_binary_header to bpf_prog Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 5/7] x86/alternative: introduce text_poke_copy Song Liu
2022-01-21 19:49 ` [PATCH v6 bpf-next 6/7] bpf: introduce bpf_prog_pack allocator Song Liu
2022-01-21 23:55   ` Alexei Starovoitov
2022-01-22  0:23     ` Song Liu
2022-01-22  0:41       ` Alexei Starovoitov
2022-01-22  1:01         ` Song Liu [this message]
2022-01-22  1:12           ` Alexei Starovoitov
2022-01-22  1:30             ` Song Liu
2022-01-22  2:12               ` Alexei Starovoitov
2022-01-23  1:03                 ` Song Liu
2022-01-24 12:29                   ` Ilya Leoshkevich
2022-01-24 18:27                     ` Song Liu
2022-01-25  5:21                       ` Alexei Starovoitov
2022-01-25  7:21                         ` Song Liu
2022-01-25 19:59                           ` Alexei Starovoitov
2022-01-25 22:25                             ` Song Liu
2022-01-25 22:48                               ` Alexei Starovoitov
2022-01-25 23:09                                 ` Song Liu
2022-01-26  0:38                                   ` Alexei Starovoitov
2022-01-26  0:50                                     ` Song Liu
2022-01-26  1:20                                       ` Alexei Starovoitov
2022-01-26  1:28                                         ` Song Liu
2022-01-26  1:31                                           ` Song Liu
2022-01-26  1:34                                             ` Alexei Starovoitov
2022-01-24 12:45                   ` Peter Zijlstra
2022-01-21 19:49 ` [PATCH v6 bpf-next 7/7] bpf, x86_64: use " Song Liu
