From: Aaron Lu <aaron.lu@intel.com>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>, Song Liu <song@kernel.org>,
	<linux-kernel@vger.kernel.org>, <bpf@vger.kernel.org>,
	<linux-mm@kvack.org>, <ast@kernel.org>, <daniel@iogearbox.net>,
	<peterz@infradead.org>, <torvalds@linux-foundation.org>,
	<rick.p.edgecombe@intel.com>, <kernel-team@fb.com>
Subject: Re: [PATCH v4 bpf-next 0/8] bpf_prog_pack followup
Date: Tue, 21 Jun 2022 09:45:36 +0800
Message-ID: <YrEiwHGs4vY+wLsx@ziqianlu-Dell-Optiplex7000>
In-Reply-To: <YrC9CyOPamPneUOT@bombadil.infradead.org>

On Mon, Jun 20, 2022 at 11:31:39AM -0700, Luis Chamberlain wrote:
> On Mon, Jun 20, 2022 at 07:11:45PM +0800, Aaron Lu wrote:
> > Hi Song,
> > 
> > On Fri, May 20, 2022 at 04:57:50PM -0700, Song Liu wrote:
> > 
> > ... ...
> > 
> > > The primary goal of bpf_prog_pack is to reduce iTLB miss rate and reduce
> > > direct memory mapping fragmentation. This leads to non-trivial performance
> > > improvements.
> > >
> > > For our web service production benchmark, bpf_prog_pack on 4kB pages
> > > gives 0.5% to 0.7% more throughput than not using bpf_prog_pack.
> > > bpf_prog_pack on 2MB pages gives 0.6% to 0.9% more throughput than
> > > not using bpf_prog_pack. Note that 0.5% is a huge improvement for our
> > > fleet. I believe this is also significant for other companies with
> > > many thousands of servers.
> > >
> > 
> > I'm evaluating the performance impact of direct memory mapping
> > fragmentation
> 
> BTW how exactly are you doing this?

Right now I'm mostly collecting materials from the web :-)

Zhengjun ran some extensive microbenchmarks a while ago with different
page sizes for the direct mapping, on several different server
machines; here is his report:
https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
Quoting part of the conclusion:
"
This leads us to conclude that although 1G mappings are a 
good default choice, there is no compelling evidence that it must be the 
only choice, or that folks deriving benefits (like hardening) from 
smaller mapping sizes should avoid the smaller mapping sizes.
"

I searched the archive and found there is a performance problem when
the kernel text huge mapping gets split:
https://lore.kernel.org/lkml/20190823052335.572133-1-songliubraving@fb.com/
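
(The iTLB miss rate itself can be watched with
"perf stat -e iTLB-load-misses", or programmatically with
perf_event_open(2); below is a minimal self-measuring sketch following
the perf_event_open(2) man page pattern, not code from Song's series:)

/* Count iTLB load misses around a region of interest; this is the
 * same counter "perf stat -e iTLB-load-misses" reads. */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HW_CACHE;
	attr.size = sizeof(attr);
	/* cache id | (op << 8) | (result << 16), per perf_event_open(2) */
	attr.config = PERF_COUNT_HW_CACHE_ITLB |
		      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
		      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
	attr.disabled = 1;	/* enabled explicitly below */

	/* Measure the calling task on any CPU: pid == 0, cpu == -1. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* ... workload of interest goes here ... */

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("iTLB load misses: %lld\n", count);
	return 0;
}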

But I haven't found a report complaining about direct mapping
fragmentation yet.

Thread overview: 21+ messages
2022-05-20 23:57 [PATCH v4 bpf-next 0/8] bpf_prog_pack followup Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 1/8] bpf: fill new bpf_prog_pack with illegal instructions Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 2/8] x86/alternative: introduce text_poke_set Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 3/8] bpf: introduce bpf_arch_text_invalidate for bpf_prog_pack Song Liu
2022-05-24  7:20   ` Mike Rapoport
2022-05-20 23:57 ` [PATCH v4 bpf-next 4/8] module: introduce module_alloc_huge Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 5/8] bpf: use module_alloc_huge for bpf_prog_pack Song Liu
2022-05-24  7:22   ` Mike Rapoport
2022-05-24 16:42     ` Edgecombe, Rick P
2022-06-17 23:05   ` Edgecombe, Rick P
2022-05-20 23:57 ` [PATCH v4 bpf-next 6/8] vmalloc: WARN for set_vm_flush_reset_perms() on huge pages Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 7/8] vmalloc: introduce huge_vmalloc_supported Song Liu
2022-05-20 23:57 ` [PATCH v4 bpf-next 8/8] bpf: simplify select_bpf_prog_pack_size Song Liu
2022-05-23 21:20 ` [PATCH v4 bpf-next 0/8] bpf_prog_pack followup patchwork-bot+netdevbpf
2022-06-20 11:11 ` Aaron Lu
2022-06-20 16:03   ` Song Liu
2022-06-21  1:31     ` Aaron Lu
2022-06-21  2:51       ` Song Liu
2022-06-21  3:25         ` Aaron Lu
2022-06-20 18:31   ` Luis Chamberlain
2022-06-21  1:45     ` Aaron Lu [this message]
