From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "mcgrof@kernel.org" <mcgrof@kernel.org>
Cc: "songliubraving@fb.com" <songliubraving@fb.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
	"hch@infradead.org" <hch@infradead.org>,
	"rppt@kernel.org" <rppt@kernel.org>,
	"daniel@iogearbox.net" <daniel@iogearbox.net>,
	"Torvalds, Linus" <torvalds@linux-foundation.org>,
	"ast@kernel.org" <ast@kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"song@kernel.org" <song@kernel.org>,
	"christophe.leroy@csgroup.eu" <christophe.leroy@csgroup.eu>,
	"dave@stgolabs.net" <dave@stgolabs.net>,
	"Kernel-team@fb.com" <Kernel-team@fb.com>,
	"pmladek@suse.com" <pmladek@suse.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"dborkman@redhat.com" <dborkman@redhat.com>,
	"edumazet@google.com" <edumazet@google.com>,
	"bp@alien8.de" <bp@alien8.de>, "mbenes@suse.cz" <mbenes@suse.cz>,
	"a.manzanares@samsung.com" <a.manzanares@samsung.com>,
	"imbrenda@linux.ibm.com" <imbrenda@linux.ibm.com>
Subject: Re: [PATCH v4 bpf 0/4] vmalloc: bpf: introduce VM_ALLOW_HUGE_VMAP
Date: Tue, 19 Apr 2022 23:58:51 +0000
Message-ID: <fc9a006f8f7ae548cbc5881038428ee5bcc3ae16.camel@intel.com>
In-Reply-To: <Yl8olpqvZxY8KoNf@bombadil.infradead.org>

On Tue, 2022-04-19 at 14:24 -0700, Luis Chamberlain wrote:
> On Tue, Apr 19, 2022 at 01:56:03AM +0000, Edgecombe, Rick P wrote:
> > Yea, that was my understanding. X86 modules have to be linked
> > within 2GB of the kernel text, and the eBPF x86 JIT also generates
> > code that expects to be within 2GB of the kernel text.
> 
> And kprobes / live patching / ftrace.
> 
> Another architectural fun fact: powerpc book3s/32 requires
> executability to be set per 256 MB segment. Architectures like this
> one will also want to optimize how they use the module alloc area.
> 
> Even though today the use cases might be limited, we don't exactly
> know how much memory a target device has as well, so treating memory
> failures for "special memory" requests as regular memory failures
> seems a bit odd, and users could get confused. For instance, slapping
> extra memory on a system won't resolve any issues if the limit for a
> special type of memory is already hit. Very likely not a problem at
> all today, given how small modules / eBPF JIT programs are / etc.,
> but conceptually it would seem wrong to just say -ENOMEM when in
> fact it's a special type of required memory which cannot be allocated
> and the issue cannot possibly be fixed. I don't think we have an
> option but to use -ENOMEM, but at least hinting at the special
> failure would seem desirable.

ENOMEM doesn't always mean out of physical memory though, right? It
could be hitting some other memory limit. Not sure where this
discussion of the error code is coming from, though.
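
(To be fair to the point though: if the special range rather than
physical memory is exhausted, about the only extra thing an allocator
could do is log a hint. A rough sketch, assuming an x86-like module
range; the pr_warn_once() is illustrative and not from any posted
patch:

	void *p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR,
				       MODULES_END, GFP_KERNEL, PAGE_KERNEL,
				       0, NUMA_NO_NODE,
				       __builtin_return_address(0));
	if (!p) {
		/* the address range, not RAM, may be what ran out */
		pr_warn_once("executable allocation range exhausted\n");
		return -ENOMEM;
	}

Callers would still just see -ENOMEM either way.)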

As far as the problem of eating a whole 2MB on small systems, this
makes sense to me to worry about. Errata limit what we can do here
with swapping page sizes on the fly. A sensible solution would
probably be to decide whether to even try based on system properties
like boot memory size.
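
A minimal sketch of that kind of gate (the function name and the 1GB
threshold are assumptions for illustration, not something from this
patch set):

	#include <linux/mm.h>
	#include <linux/sizes.h>

	/* Only attempt huge-page backed executable allocations on
	 * systems with enough memory that rounding an allocation up
	 * to 2MB is unlikely to hurt. Threshold would need tuning. */
	static bool exec_alloc_try_huge(void)
	{
		return totalram_pages() >= (SZ_1G >> PAGE_SHIFT);
	}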

Even without 2MB pages though, there are still improvements from these
types of changes.

> 
> Do we have other types of architectural limitations for "special
> memory" other than executable? Do we have *new* types of special
> memory we should consider which might be similar / limited in
> nature? And can / could / should these architectural limitations
> hopefully disappear in newer CPUs? I see vmalloc_pks() as you
> pointed out [0]. Anything else?

Hmm, shadow stack permission memory could pop up in vmalloc someday.

Not sure what you mean by architectural limitations. The relative
addressing? If so, those other usages shouldn't be restricted by that.

> 
> > I think of two types of caches we could have: caches of unmapped
> > pages on the direct map, and caches of virtual memory mappings.
> > Caches of pages on the direct map reduce breakage of the large
> > pages (a somewhat x86-specific problem). Caches of virtual memory
> > mappings reduce shootdowns, and are also required to share huge
> > pages. I'll plug my old RFC, where I tried to work towards
> > enabling both:
> > 
> > 
> > https://lore.kernel.org/lkml/20201120202426.18009-1-rick.p.edgecombe@intel.com/
> > 
> > Since then Mike has taken the direct map cache piece a lot further.
> > 
> > Yea, probably a lot of JITs are way smaller than a page, but there
> > is also hopefully some performance benefit from reduced ITLB
> > pressure and TLB shootdowns. I think kprobes/ftrace (or at least
> > one of them) keeps its own cache of a page for putting very small
> > trampolines.
> 
> The reason I looked into *why* module_alloc() was used was
> particularly because it seemed a bit odd to have such ITLB
> enhancements for such a niche use case when we couldn't have desired
> this elsewhere before.

I think in general it is the only cross-arch way to get an allocation
in the arch's executable-compatible area (which some need, as you
say). module_alloc() is probably misnamed at this point, with so many
users that are not modules.
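
(Concretely, the BPF JIT's default allocator is just a thin wrapper --
roughly this, from kernel/bpf/core.c:

	void *__weak bpf_jit_alloc_exec(unsigned long size)
	{
		return module_alloc(size);
	}

	void __weak bpf_jit_free_exec(void *addr)
	{
		module_memfree(addr);
	}

which is what keeps JITed code inside the arch's executable window,
e.g. within 2GB of kernel text on x86.)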

> 
> > > Then, since it seems the vmalloc area was not initialized,
> > > wouldn't that break the old JIT spray fixes? Refer to commit
> > > 314beb9bcabfd ("x86: bpf_jit_comp: secure bpf jit against
> > > spraying attacks").
> > 
> > Hmm, yea it might be a way to get around the eBPF JIT rlimit. The
> > allocator could just text_poke() invalid instructions on "free" of
> > the JIT.
> > 
> > > 
> > > Is that sort of work not needed anymore? If in doubt, I at least
> > > made the old proof-of-concept JIT spray stuff compile on recent
> > > kernels [0], but I haven't tried out your patches yet. If this
> > > is not needed anymore, why not?
> > 
> > IIRC this got addressed in two ways: randomizing the JIT offset
> > inside the vmalloc allocation, and "constant blinding", such that
> > the specific attack of inserting unaligned instructions as
> > immediate instruction data did not work. Neither of those
> > mitigations seems unworkable with a large-page caching allocator.
> 
> Got it, but was it *also* considered in the fixes posted recently?

I didn't read any discussion about it. But if it wasn't clear, I'm just
an onlooker on bpf_prog_pack. I didn't see it until it was already
upstream. Maybe Song can say.
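
For the archives, the "constant blinding" part is simple to sketch.
This is conceptual only -- jit_ctx and the emit_* helpers are
hypothetical, not the real x86 JIT interface:

	/* Never place an attacker-chosen immediate k verbatim in
	 * executable memory: emit k ^ rnd, then undo the xor at
	 * runtime, so k's raw bytes never land in the JIT image. */
	static void emit_blinded_imm(struct jit_ctx *ctx, int reg, u32 k)
	{
		u32 rnd = get_random_u32();

		emit_mov_imm(ctx, reg, k ^ rnd);	/* hypothetical helper */
		emit_xor_imm(ctx, reg, rnd);		/* reg now holds k */
	}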

> 
> > > The collection of tribal knowledge around these sorts of things
> > > would be good not to lose, and if we can share, even better.
> > 
> > Totally agree here. I think the abstraction I was exploring in
> > that RFC could remove some of the special permission memory tribal
> > knowledge that is lurking in the cross-arch module.c. I wonder if
> > you have any thoughts on something like that? The normal modules
> > proved the hardest.
> 
> Yeah modules will be harder now with the new
> ARCH_WANTS_MODULES_DATA_IN_VMALLOC, which Christophe Leroy added
> (queued in my modules-next).

Part of that work was separating out each module into four
allocations, so it might make this easier.
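
Illustrating that split -- these enum names are made up for this
email, not the real module loader interface; the point is just that
text and data become separately allocatable, so only some of the
pieces need a permission-aware allocator:

	enum mod_alloc_class {
		MOD_ALLOC_TEXT,			/* executable */
		MOD_ALLOC_RODATA,		/* read-only */
		MOD_ALLOC_RO_AFTER_INIT,	/* read-only after init */
		MOD_ALLOC_DATA,			/* writable */
	};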

>  At a quick
> glance it seems like an API in the right direction, but you just need
> more architecture folks other than the usual x86 suspects to review.
> 
> Perhaps time for a new spin?

I would very much like to, but I am currently too busy with another
project. As such, I am mostly just trying to contribute ideas and my
personal collection of the hidden knowledge. From what Song said, it
sounded like someone might want to tackle it before I can get back to
it.

And yes, review from other arch folks would be critical to making sure
it actually is a better interface and not just another one.

> 
> [0] https://lore.kernel.org/lkml/20201009201410.3209180-2-ira.weiny@intel.com/
> 
>   Luis
