From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Mike Rapoport <rppt@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>,
lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] Restricted kernel address spaces
Date: Wed, 12 Feb 2020 00:53:34 +0300 [thread overview]
Message-ID: <20200211215334.bftqnru57mv5bcza@box> (raw)
In-Reply-To: <20200211172047.GA24237@hump>
On Tue, Feb 11, 2020 at 07:20:47PM +0200, Mike Rapoport wrote:
> On Fri, Feb 07, 2020 at 08:39:09PM +0300, Kirill A. Shutemov wrote:
> > On Thu, Feb 06, 2020 at 06:59:00PM +0200, Mike Rapoport wrote:
> > >
> > > Restricted mappings in kernel mode may improve mitigation of hardware
> > > speculation vulnerabilities and minimize the damage that exploitable
> > > kernel bugs can cause.
> > >
> > > There are several ongoing efforts to use restricted address spaces in the
> > > Linux kernel for various use cases:
> > > * speculation vulnerabilities mitigation in KVM [1]
> > > * support for memory areas visible only in a single owning context, or more
> > > generically, memory areas with more restrictive protection than the
> > > defaults ("secret" memory) [2], [3], [4]
> > > * hardening of Linux containers [ no reference yet :) ]
> > >
> > > Last year we had vague ideas and possible directions; this year we have
> > > several real challenges and design decisions we'd like to discuss:
> > >
> > > * "Secret" memory userspace APIs
> > >
> > > Should such an API follow "native" MM interfaces like mmap(), mprotect()
> > > and madvise(), or would it be better to use a file descriptor, e.g. like
> > > memfd_create() does?
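[For reference, the fd-based pattern this question alludes to can be sketched in userspace C. The sketch below uses only the existing memfd_create(2) syscall as the model; the "secret" variant under discussion is hypothetical here, and the function name is illustrative, not a proposed API.]

```c
/* Sketch of the fd-based pattern, built on the existing memfd_create(2).
 * A "secret" variant would hand back a similar fd whose .mmap()/.fault()
 * enforce the restricted mapping -- that part is hypothetical. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int demo_fd_backed_mapping(void)
{
	int fd = memfd_create("demo", MFD_CLOEXEC);	/* anonymous file */
	if (fd < 0)
		return -1;
	if (ftruncate(fd, 4096) < 0) {			/* size the backing file */
		close(fd);
		return -1;
	}

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		close(fd);
		return -1;
	}

	strcpy(p, "fd-backed");				/* use the mapping */
	int ok = (strcmp(p, "fd-backed") == 0);

	munmap(p, 4096);
	close(fd);
	return ok ? 0 : -1;
}
```

[A secret-memory syscall would follow the same shape, but additionally drop the pages from the direct map behind the scenes.]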
> >
> > I don't really see a point in such a file descriptor. This is supposed to
> > be very private, secret data. What functionality provided by a file
> > descriptor do you see as valuable in this scenario?
> >
> > A file descriptor makes it easier to spill the secrets to another process:
> > over fork(), a UNIX socket, or via /proc/PID/fd/.
>
> On the other hand, it may be desirable to share a secret between several
> processes. Then a UNIX socket or fork() actually becomes handy.
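[The fork() path is easy to demonstrate with plain memfd_create(2): the child inherits both the fd and the mapping, so the parent's writes are visible to it. A minimal sketch; the names are illustrative and nothing here is part of any proposed secret-memory interface.]

```c
/* fork() inheriting an fd-backed shared mapping: the child sees what
 * the parent wrote through the inherited mapping. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int demo_fd_inherited_over_fork(void)
{
	int fd = memfd_create("shared", MFD_CLOEXEC);
	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return -1;

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return -1;

	strcpy(p, "shared-secret");		/* written before fork() */

	pid_t pid = fork();
	if (pid < 0)
		return -1;
	if (pid == 0)				/* child: check the mapping */
		_exit(strcmp(p, "shared-secret") == 0 ? 0 : 1);

	int status;
	if (waitpid(pid, &status, 0) != pid)
		return -1;
	return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

[SCM_RIGHTS over a UNIX socket achieves the same fd transfer between unrelated processes, which is exactly the spill/sharing trade-off being debated.]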
If more than one knows, it is secret no longer :P
> > > MM "native" APIs would require a VM_something flag and probably a page
> > > flag or page_ext. With a file descriptor, VM_SPECIAL and custom
> > > implementations of .mmap() and .fault() would suffice. On the other hand,
> > > mmap() and mprotect() seem a better fit semantically, and they could be
> > > more easily adopted by userspace.
> >
> > You are mixing up implementation and interface. You can provide an
> > interface which doesn't require a file descriptor, but still use a magic
> > file internally to keep the VMA distinct.
>
> If I understand correctly, if we go with the mmap(MAP_SECRET) example, the
> mmap() would implicitly create a magic file whose .mmap() and .fault()
> implement the protection? That's a possibility. But then, if we already
> have a file, why not let the user get a handle for it and allow fine-grained
> control over its sharing between processes?
A proper file descriptor would have wider exposure, with security
implications. It would at least have to be scoped properly.
> > > * Direct/linear map fragmentation
> > >
> > > Whenever we want to drop some mappings from the direct map, or even change
> > > the protection bits for some memory area, the gigantic and huge pages
> > > that comprise the direct map need to be broken up, and there is no THP for
> > > the kernel page tables to collapse them back. Moreover, the existing APIs
> > > defined in <asm/set_memory.h> by several architectures do not really
> > > presume they would be widely used.
> > >
> > > For the "secret" memory use case, fragmentation can be minimized by
> > > caching large pages, using them to satisfy smaller "secret" allocations,
> > > and then collapsing them back once the "secret" memory is freed. Another
> > > possibility is to pre-allocate physical memory at boot time.
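[The caching scheme described above can be modeled as a toy allocator: one 2M "huge page" is carved into 4K chunks while any secret allocation is live, and "collapsed" back once the last chunk is freed. This is purely illustrative bookkeeping; all names and the structure are assumptions, not kernel code.]

```c
/* Toy model of the large-page cache: carve one 2M page into 4K chunks,
 * collapse it back into the direct map when every chunk is freed. */
#include <stdbool.h>
#include <stddef.h>

#define HUGE_CHUNKS 512			/* 2M / 4K */

struct huge_cache {
	bool used[HUGE_CHUNKS];
	size_t in_use;			/* chunks currently handed out */
	bool collapsed;			/* true: mapped as one huge page */
};

const struct huge_cache HUGE_CACHE_INIT = { .collapsed = true };

/* Hand out one 4K chunk; the huge mapping is split on first use. */
int cache_alloc(struct huge_cache *c)
{
	for (int i = 0; i < HUGE_CHUNKS; i++) {
		if (!c->used[i]) {
			c->used[i] = true;
			c->in_use++;
			c->collapsed = false;	/* huge mapping now split */
			return i;
		}
	}
	return -1;				/* cache exhausted */
}

/* Return a chunk; collapse the page once nothing is in use. */
void cache_free(struct huge_cache *c, int i)
{
	if (i < 0 || i >= HUGE_CHUNKS || !c->used[i])
		return;
	c->used[i] = false;
	if (--c->in_use == 0)
		c->collapsed = true;		/* whole page free again */
}
```

[The real kernel-side version would additionally manipulate the direct-map page tables via the set_memory helpers; only the accounting idea is shown here.]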
> >
> > I would rather go with the pre-allocation path, at least at first. We can
> > always come up with a more dynamic and complicated solution later if the
> > interface is widely adopted.
>
> We still must manage the "secret" allocations, so I don't think that the
> dynamic solution will be much more complicated.
Okay.
BTW, with the clarified scope of the AMD erratum, I believe we can implement
"collapse" for the direct mapping. Willing to try?
> > > Yet another idea is to make the page allocator aware of the direct map layout.
> > >
> > > * Kernel page table management
> > >
> > > Currently we presume that only one kernel page table exists (well,
> > > mostly) and the page table abstraction is required only for the user page
> > > tables. As such, we presume that 'page table == struct mm_struct' and the
> > > mm_struct is used all over by the operations that manage the page tables.
> > >
> > > The management of restricted address spaces in the kernel requires the
> > > ability to create, update and remove kernel contexts the same way we do
> > > for userspace.
> > >
> > > One way is to overload the mm_struct, like EFI and text poking did. But
> > > it is quite overkill, because most of the mm_struct contains
> > > information required to manage user mappings.
> >
> > In what way is it overkill? Just the memory overhead? How many such
> > contexts do you expect to see in the system?
>
> Well, the memory overhead is not that big, but it's not negligible. For the
> KVM ASI use case, for instance, there will be at least as many contexts as
> running VMs. We also have thoughts about how to make namespaces use
> restricted address spaces; for this use case there will be quite a lot of
> such contexts.
>
> Besides, it does not feel right to have the mm_struct represent a page
> table.
Fair enough. It might be interesting.
--
Kirill A. Shutemov
Thread overview: 7+ messages
2020-02-06 16:59 [LSF/MM/BPF TOPIC] Restricted kernel address spaces Mike Rapoport
2020-02-07 17:39 ` Kirill A. Shutemov
2020-02-11 17:20 ` Mike Rapoport
2020-02-11 21:53 ` Kirill A. Shutemov [this message]
2020-02-16 6:35 ` Mike Rapoport
2020-02-17 10:34 ` Kirill A. Shutemov
2020-02-18 15:06 ` Mike Rapoport