From: Song Liu <songliubraving@fb.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: bpf <bpf@vger.kernel.org>, Networking <netdev@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Kernel Team <Kernel-team@fb.com>,
	Christoph Hellwig <hch@infradead.org>,
	Peter Zijlstra <peterz@infradead.org>, Song Liu <song@kernel.org>
Subject: Re: [PATCH bpf v2 0/3] bpf: invalidate unused part of bpf_prog_pack
Date: Sat, 7 May 2022 06:50:17 +0000	[thread overview]
Message-ID: <719D99A4-3100-49EC-A7D6-6F9CDBA053C4@fb.com> (raw)
In-Reply-To: <57DBEBDB-71AF-4A85-AB8D-8274541E0F3C@fb.com>



> On Apr 27, 2022, at 11:48 PM, Song Liu <songliubraving@fb.com> wrote:
> 
> Hi Linus, 
> 
> Thanks for your thorough analysis of the situation, which makes a lot
> of sense.
> 
>> On Apr 27, 2022, at 6:45 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
>> 
>> On Wed, Apr 27, 2022 at 3:24 PM Song Liu <songliubraving@fb.com> wrote:
>>> 
>>> Could you please share your suggestions on this set? Shall we ship it
>>> with 5.18?
>> 
>> I'd personally prefer to just not do the prog_pack thing at all, since
>> I don't think it was actually in a "ready to ship" state for this
>> merge window, and the hugepage mapping protection games I'm still
>> leery of.
>> 
>> Yes, the hugepage protection things probably do work from what I saw
>> when I looked through them, but that x86 vmalloc hugepage code was
>> really designed for another use (non-refcounted device pages), so the
>> fact that it all actually seems surprisingly ok certainly wasn't
>> because the code was designed to do that new case.
>> 
>> Does the prog_pack thing work with small pages?
>> 
>> Yes. But that wasn't what it was designed for or its selling point, so
>> it all is a bit suspect to me.
> 
> prog_pack on small pages can also reduce direct map fragmentation.
> This is because libbpf uses tiny BPF programs to probe kernel features.
> Before prog_pack, every one of these BPF programs could fragment the
> direct map. For example, runqslower (tools/bpf/runqslower/) loads a total
> of 7 BPF programs (3 actual programs and 4 tiny probe programs), and all
> of them may cause direct map fragmentation. With prog_pack, OTOH, these
> BPF programs would fit in a single page (or even share pages with other
> tools).
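
To make the packing idea concrete, below is a rough userspace sketch (not
the kernel's actual bpf_prog_pack implementation; all names are made up for
illustration): small program images are carved out of one shared buffer, so
a whole group of tiny programs touches a single page instead of one page
each.

/* Rough sketch of the packing idea, not the real bpf_prog_pack code:
 * hand out sub-page chunks of one shared buffer so several small
 * program images end up on the same page.
 */
#include <stddef.h>
#include <stdint.h>

#define PACK_SIZE 4096			/* one 4kB page */

struct prog_pack {
	uint8_t buf[PACK_SIZE];
	size_t used;
};

/* Return a chunk for an image of 'len' bytes, or NULL if the pack is
 * full and a new page would be needed.
 */
static void *pack_alloc(struct prog_pack *pack, size_t len)
{
	void *p;

	if (pack->used + len > PACK_SIZE)
		return NULL;
	p = pack->buf + pack->used;
	pack->used += len;
	return p;
}

With such packing, runqslower's 7 images would typically land in one page
rather than seven.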

Here are some performance data from our web service production benchmark, 
which is the biggest service in our fleet. We compare 3 kernels:    

  nopack: no bpf_prog_pack; IOW, the same behavior as 5.17
  4kpack: use bpf_prog_pack on 4kB pages (same as 5.18-rc5)
  2mpack: use bpf_prog_pack on 2MB pages

The benchmark measures system throughput under latency constraints. 
4kpack provides 0.5% to 0.7% more throughput than nopack. 
2mpack provides 0.6% to 0.9% more throughput than nopack. 

So the data has confirmed:
1. Direct map fragmentation has a non-trivial impact on system performance
   (a small snippet for observing it on x86 is included below);
2. While 2MB pages are preferred, bpf_prog_pack on 4kB pages also gives
   significant performance improvements.
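
For reference, one way to observe direct map fragmentation on x86 is to
watch the DirectMap* counters in /proc/meminfo: DirectMap4k growing at the
expense of DirectMap2M/DirectMap1G means large direct map mappings are
being split. A minimal reader, for illustration only:

/* Print the DirectMap* lines from /proc/meminfo (x86). Watching
 * DirectMap4k grow over time is one way to see the direct map being
 * fragmented into small pages.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "DirectMap", 9))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}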

Thanks,
Song



Thread overview: 9+ messages
2022-04-25 20:39 [PATCH bpf v2 0/3] bpf: invalidate unused part of bpf_prog_pack Song Liu
2022-04-25 20:39 ` [PATCH v2 bpf 1/3] bpf: fill new bpf_prog_pack with illegal instructions Song Liu
2022-04-25 20:39 ` [PATCH v2 bpf 2/3] x86/alternative: introduce text_poke_set Song Liu
2022-04-25 20:39 ` [PATCH v2 bpf 3/3] bpf: introduce bpf_arch_text_invalidate for bpf_prog_pack Song Liu
2022-04-27 22:10 ` [PATCH bpf v2 0/3] bpf: invalidate unused part of bpf_prog_pack Song Liu
2022-04-28  1:45   ` Linus Torvalds
2022-04-28  6:48     ` Song Liu
2022-05-07  6:50       ` Song Liu [this message]
2022-05-07 19:36         ` Song Liu
