linux-mm.kvack.org archive mirror
From: Mike Rapoport <rppt@kernel.org>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mike Rapoport <rppt@linux.ibm.com>,
	lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] Restricted kernel address spaces
Date: Sun, 16 Feb 2020 08:35:04 +0200	[thread overview]
Message-ID: <20200216063504.GA22092@hump.haifa.ibm.com> (raw)
In-Reply-To: <20200211215334.bftqnru57mv5bcza@box>

On Wed, Feb 12, 2020 at 12:53:34AM +0300, Kirill A. Shutemov wrote:
> On Tue, Feb 11, 2020 at 07:20:47PM +0200, Mike Rapoport wrote:
> > On Fri, Feb 07, 2020 at 08:39:09PM +0300, Kirill A. Shutemov wrote:
> > > On Thu, Feb 06, 2020 at 06:59:00PM +0200, Mike Rapoport wrote:
> > > > 
> > > > * "Secret" memory userspace APIs
> > > > 
> > > >   Should such an API follow "native" MM interfaces like mmap(),
> > > >   mprotect() and madvise(), or would it be better to use a file
> > > >   descriptor, e.g. like memfd_create() does?
> > > 
> > > I don't really see a point in such a file descriptor. It is supposed to
> > > be very private secret data. What functionality provided by a file
> > > descriptor do you see as valuable in this scenario?
> > > 
> > > A file descriptor makes it easier to spill the secrets to another
> > > process: over fork(), a UNIX socket, or via /proc/PID/fd/.
> > 
> > On the other hand, it may be desirable to share a secret between several
> > processes. Then a UNIX socket or fork() actually becomes handy.
> 
> If more than one knows, it is secret no longer :P

But even cryptographers define "shared secret" ;-)
 
> > > >   MM "native" APIs would require VM_something flag and probably a page flag
> > > >   or page_ext. With file-descriptor VM_SPECIAL and custom implementation of
> > > >   .mmap() and .fault() would suffice. On the other hand, mmap() and
> > > >   mprotect() seem better fit semantically and they could be more easily
> > > >   adopted by the userspace.
> > > 
> > > You mix up implementation and interface. You can provide an interface
> > > which doesn't require a file descriptor, but still use a magic file
> > > internally to keep the VMA distinct.
> > 
> > If I understand correctly, if we go with the mmap(MAP_SECRET) example,
> > mmap() would implicitly create a magic file with its .mmap() and .fault()
> > implementing the protection? That's a possibility. But then, if we
> > already have a file, why not let the user get a handle for it and allow
> > fine-grained control over its sharing between processes?
> 
> A proper file descriptor would have wider exposure with security
> implications. It has to be at least scoped properly.
 
Agree.

> > > > * Direct/linear map fragmentation
> > > > 
> > > >   Whenever we want to drop some mappings from the direct map or even
> > > >   change the protection bits for some memory area, the gigantic and
> > > >   huge pages that comprise the direct map need to be broken up, and
> > > >   there is no THP for the kernel page tables to collapse them back.
> > > >   Moreover, the existing APIs defined in <asm/set_memory.h> by
> > > >   several architectures do not really presume they would be widely
> > > >   used.
> > > > 
> > > >   For the "secret" memory use-case the fragmentation can be minimized
> > > >   by caching large pages, using them to satisfy smaller "secret"
> > > >   allocations and then collapsing them back once the "secret" memory
> > > >   is freed. Another possibility is to pre-allocate physical memory at
> > > >   boot time.
> > > 
> > > I would rather go with the pre-allocation path, at least at first. We
> > > can always come up with a more dynamic and complicated solution later
> > > if the interface is widely adopted.
> > 
> > We still must manage the "secret" allocations, so I don't think that the
> > dynamic solution will be much more complicated.
> 
> Okay.
> 
> BTW, with clarified scope of the AMD Erratum, I believe we can implement
> "collapse" for direct mapping. Willing to try?
 
My initial plan was to use a pool of large pages to satisfy "secret"
allocation requests. Whenever a new large page is allocated for that pool,
it is removed from the direct map without being split into small pages;
then, when it is reinstated, there is no need to collapse it.

> > > >   Yet another idea is to make page allocator aware of the direct map layout.
> 
> -- 
>  Kirill A. Shutemov

-- 
Sincerely yours,
Mike.


Thread overview: 7+ messages
2020-02-06 16:59 [LSF/MM/BPF TOPIC] Restricted kernel address spaces Mike Rapoport
2020-02-07 17:39 ` Kirill A. Shutemov
2020-02-11 17:20   ` Mike Rapoport
2020-02-11 21:53     ` Kirill A. Shutemov
2020-02-16  6:35       ` Mike Rapoport [this message]
2020-02-17 10:34         ` Kirill A. Shutemov
2020-02-18 15:06           ` Mike Rapoport
