From: Song Liu <song@kernel.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
"peterz@infradead.org" <peterz@infradead.org>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
"rppt@kernel.org" <rppt@kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"hch@lst.de" <hch@lst.de>, "x86@kernel.org" <x86@kernel.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"Lu, Aaron" <aaron.lu@intel.com>
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
Date: Wed, 16 Nov 2022 17:17:21 -0800 [thread overview]
Message-ID: <CAPhsuW6KyuQNim4O81WiAeVNsBrtFF9TT9595pP1UsShWFZXLw@mail.gmail.com> (raw)
In-Reply-To: <Y3V4DEUeICDBYt62@bombadil.infradead.org>
On Wed, Nov 16, 2022 at 3:53 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Wed, Nov 16, 2022 at 10:47:04PM +0000, Edgecombe, Rick P wrote:
> > On Wed, 2022-11-16 at 14:33 -0800, Luis Chamberlain wrote:
> > > More in lines with what I was hoping for. Can something just do
> > > the parallelization for you in one shot? Can bench alone do it for
> > > you?
> > > Is there no interest to have soemthing which generically showcases
> > > multithreading / hammering a system with tons of eBPF JITs? It may
> > > prove useful.
> > >
> > > And also, it begs the question, what if you had another iTLB generic
> > > benchmark or genearl memory pressure workload running *as* you run
> > > the
> > > above? I as, as it was my understanding that one of the issues was
> > > the
> > > long term slowdown caused by the directmap fragmentation without
> > > bpf_prog_pack, and so such an application should crawl to its knees
> > > over time, and there should be numbers you could show to prove that
> > > too, before and after.
> >
> > We did have some benchmarks that showed if your direct map was totally
> > fragmented (started from boot at 4k page size) what the regression was:
> >
> >
> > https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
>
> Oh yes that is a good example of effort, but I'm suggesting taking for
> instance will-it-scale and run it in tandem with bpg prog pack
> and measure on *both* iTLB differences, before / after, *and* doing
> this again after a period of expected deterioation of the direct
> map fragmentation (say after non-bpf-prog-pack shows high direct
> map fragmetnation).
>
> This is the sort of thing which easily go into a commit log.
To be honest, I don't see experimental results with artificial benchmarks
would help this set. I don't think a real workload would see 10% speed
up from this set (we can see large % improvements in TLB miss rate
though). However, 1% or even 0.5% improvement matters a lot for
large scale workload.
Thanks,
Song
next prev parent reply other threads:[~2022-11-17 1:17 UTC|newest]
Thread overview: 91+ messages / expand[flat|nested] mbox.gz Atom feed top
2022-11-07 22:39 [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs Song Liu
2022-11-07 22:39 ` [PATCH bpf-next v2 1/5] vmalloc: introduce execmem_alloc, execmem_free, and execmem_fill Song Liu
2022-11-07 22:39 ` [PATCH bpf-next v2 2/5] x86/alternative: support execmem_alloc() and execmem_free() Song Liu
2022-11-07 22:39 ` [PATCH bpf-next v2 3/5] bpf: use execmem_alloc for bpf program and bpf dispatcher Song Liu
2022-11-07 22:39 ` [PATCH bpf-next v2 4/5] vmalloc: introduce register_text_tail_vm() Song Liu
2022-11-07 22:39 ` [PATCH bpf-next v2 5/5] x86: use register_text_tail_vm Song Liu
2022-11-08 19:04 ` Edgecombe, Rick P
2022-11-08 22:15 ` Song Liu
2022-11-15 17:28 ` Edgecombe, Rick P
2022-11-07 22:55 ` [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs Luis Chamberlain
2022-11-07 23:13 ` Song Liu
2022-11-07 23:39 ` Luis Chamberlain
2022-11-08 0:13 ` Edgecombe, Rick P
2022-11-08 2:45 ` Luis Chamberlain
2022-11-08 18:20 ` Song Liu
2022-11-08 18:12 ` Song Liu
2022-11-08 11:27 ` Mike Rapoport
2022-11-08 12:38 ` Aaron Lu
2022-11-09 6:55 ` Christoph Hellwig
2022-11-09 11:05 ` Peter Zijlstra
2022-11-08 16:51 ` Edgecombe, Rick P
2022-11-08 18:50 ` Song Liu
2022-11-09 11:17 ` Mike Rapoport
2022-11-09 17:04 ` Edgecombe, Rick P
2022-11-09 17:53 ` Song Liu
2022-11-13 10:34 ` Mike Rapoport
2022-11-14 20:30 ` Song Liu
2022-11-15 21:18 ` Luis Chamberlain
2022-11-15 21:39 ` Edgecombe, Rick P
2022-11-16 22:34 ` Luis Chamberlain
2022-11-17 8:50 ` Mike Rapoport
2022-11-17 18:36 ` Song Liu
2022-11-20 10:41 ` Mike Rapoport
2022-11-21 14:52 ` Song Liu
2022-11-30 9:39 ` Mike Rapoport
2022-11-09 17:43 ` Song Liu
2022-11-09 21:23 ` Christophe Leroy
2022-11-10 1:50 ` Song Liu
2022-11-13 10:42 ` Mike Rapoport
2022-11-14 20:45 ` Song Liu
2022-11-15 20:51 ` Luis Chamberlain
2022-11-20 10:44 ` Mike Rapoport
2022-11-08 18:41 ` Song Liu
2022-11-08 19:43 ` Christophe Leroy
2022-11-08 21:40 ` Song Liu
2022-11-13 9:58 ` Mike Rapoport
2022-11-14 20:13 ` Song Liu
2022-11-08 11:44 ` Christophe Leroy
2022-11-08 18:47 ` Song Liu
2022-11-08 19:32 ` Christophe Leroy
2022-11-08 11:48 ` Mike Rapoport
2022-11-15 1:30 ` Song Liu
2022-11-15 17:34 ` Edgecombe, Rick P
2022-11-15 21:54 ` Song Liu
2022-11-15 22:14 ` Edgecombe, Rick P
2022-11-15 22:32 ` Song Liu
2022-11-16 1:20 ` Song Liu
2022-11-16 21:22 ` Edgecombe, Rick P
2022-11-16 22:03 ` Song Liu
2022-11-15 21:09 ` Luis Chamberlain
2022-11-15 21:32 ` Luis Chamberlain
2022-11-15 22:48 ` Song Liu
2022-11-16 22:33 ` Luis Chamberlain
2022-11-16 22:47 ` Edgecombe, Rick P
2022-11-16 23:53 ` Luis Chamberlain
2022-11-17 1:17 ` Song Liu [this message]
2022-11-17 9:37 ` Mike Rapoport
2022-11-29 10:23 ` Thomas Gleixner
2022-11-29 17:26 ` Song Liu
2022-11-29 23:56 ` Thomas Gleixner
2022-11-30 16:18 ` Song Liu
2022-12-01 9:08 ` Thomas Gleixner
2022-12-01 19:31 ` Song Liu
2022-12-02 1:38 ` Thomas Gleixner
2022-12-02 8:38 ` Song Liu
2022-12-02 9:22 ` Thomas Gleixner
2022-12-06 20:25 ` Song Liu
2022-12-07 15:36 ` Thomas Gleixner
2022-12-07 16:53 ` Christophe Leroy
2022-12-07 19:29 ` Song Liu
2022-12-07 21:04 ` Thomas Gleixner
2022-12-07 21:48 ` Christophe Leroy
2022-12-07 19:26 ` Song Liu
2022-12-07 20:57 ` Thomas Gleixner
2022-12-07 23:17 ` Song Liu
2022-12-02 10:46 ` Christophe Leroy
2022-12-02 17:43 ` Thomas Gleixner
2022-12-01 20:23 ` Mike Rapoport
2022-12-01 22:34 ` Thomas Gleixner
2022-12-03 14:46 ` Mike Rapoport
2022-12-03 20:58 ` Thomas Gleixner
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=CAPhsuW6KyuQNim4O81WiAeVNsBrtFF9TT9595pP1UsShWFZXLw@mail.gmail.com \
--to=song@kernel.org \
--cc=aaron.lu@intel.com \
--cc=akpm@linux-foundation.org \
--cc=bpf@vger.kernel.org \
--cc=hch@lst.de \
--cc=linux-mm@kvack.org \
--cc=mcgrof@kernel.org \
--cc=peterz@infradead.org \
--cc=rick.p.edgecombe@intel.com \
--cc=rppt@kernel.org \
--cc=x86@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).