Linux-RISC-V Archive on lore.kernel.org
From: James Bottomley <jejb@linux.ibm.com>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Linux-MM <linux-mm@kvack.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Christopher Lameter <cl@linux.com>,
	Idan Yaniv <idan.yaniv@ibm.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Elena Reshetova <elena.reshetova@intel.com>,
	linux-arch <linux-arch@vger.kernel.org>,
	Tycho Andersen <tycho@tycho.ws>,
	linux-nvdimm@lists.01.org, Will Deacon <will@kernel.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	linaro-mm-sig@lists.linaro.org, Borislav Petkov <bp@alien8.de>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andy Lutomirski <luto@kernel.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Thomas Gleixner <tglx@linutronix.de>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Linux API <linux-api@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Linux FS-devel Mailing List <linux-fsdevel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Mike Rapoport <rppt@kernel.org>
Subject: Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
Date: Mon, 20 Jul 2020 12:16:25 -0700
Message-ID: <1595272585.4554.28.camel@linux.ibm.com> (raw)
In-Reply-To: <CAK8P3a34kx4aAFY=-SBHX3suCLwxeZY7+YSRzct93YM_OFbSWA@mail.gmail.com>

On Mon, 2020-07-20 at 20:08 +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 5:52 PM James Bottomley <jejb@linux.ibm.com>
> wrote:
> > On Mon, 2020-07-20 at 13:30 +0200, Arnd Bergmann wrote:
> > 
> > I'll assume you mean the dmabuf userspace API, because the kernel
> > API is completely device-exchange specific and wholly inappropriate
> > for this use case.
> > 
> > The user space API of dmabuf uses a pseudo-filesystem.  So you
> > mount the dmabuf file type (and by "you" I mean root because an
> > ordinary user doesn't have sufficient privilege).  This is
> > basically because every dmabuf is usable by any user who has
> > permissions.  This really isn't the initial interface we want for
> > secret memory because secret regions are supposed to be per process
> > and not shared (at least we don't want other tenants to see who's
> > using what).
> > 
> > Once you have the fd, you can seek to find the size, mmap, poll and
> > ioctl it.  The ioctls are all to do with memory synchronization (as
> > you'd expect from a device backed region) and the mmap is handled
> > by the dma_buf_ops, which is device specific.  Sizing is missing
> > because that's reported by the device, not settable by the user.
> 
> I was mainly talking about the in-kernel interface that is used for
> sharing a buffer with hardware. Aside from the limited ioctls,
> anything in the kernel can decide on how it wants to export a dma_buf
> by calling dma_buf_export()/dma_buf_fd(), which is roughly what the
> new syscall does as well. Using dma_buf vs the proposed
> implementation for this is not a big difference in complexity.

I have thought about it, but haven't got much further: we can't couple
to SGX without a huge break in the current simple userspace API (it
becomes complex because you'd have to enter the enclave each time you
want to use the memory, or put the whole process in the enclave, which
is a bit of a nightmare for simplicity), and we could only couple it to
SEV if the memory encryption engine responded to PCID as well as
ASID, which it doesn't.

> The one thing that a dma_buf does is that it allows devices to
> do DMA on it. This is either something that can turn out to be
> useful later, or it is not. From the description, it sounded like
> the sharing might be useful, since we already have known use
> cases in which "secret" data is exchanged with a trusted execution
> environment using the dma-buf interface.

The current use case for private keys is that you take an encrypted
file (which would be the DMA-coupled part) and decrypt the contents
into the secret memory.  There might possibly be a DMA component later
where an HSM-like device DMAs a key directly into your secret memory to
avoid exposure, but I wouldn't anticipate any need for anything beyond
the usual page cache API for that case (effectively this would behave
like an ordinary page cache page, except that only the current process
would be able to touch the contents).

> If there is no way the data stored in this new secret memory area
> would relate to secret data in a TEE or some other hardware
> device, then I agree that dma-buf has no value.

Never say never, but current TEE designs tend to require full
confidentiality for the entire execution.  What we're probing is
whether we can improve security with an API that requires less than
full confidentiality for the process.  I think if the API proves useful
then we will get HW support for it, but likely not in the form of
today's TEEs.

> > What we want is the ability to get an fd, set the properties and
> > the size and mmap it.  This is pretty much a 100% overlap with the
> > memfd API and not much overlap with the dmabuf one, which is why I
> > don't think the interface is very well suited.
> 
> Does that mean you are suggesting to use additional flags on
> memfd_create() instead of a new system call?

Well, that was what the previous patch did.  I'm agnostic on the
mechanism for obtaining the fd: new syscall as this patch does or
extension to memfd like the old one did.  All I was saying is that once
you have the fd, the API you use on it is the same as the memfd API.

James



Thread overview: 17+ messages
2020-07-20  9:24 [PATCH 0/6] " Mike Rapoport
2020-07-20  9:24 ` [PATCH 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2020-07-20  9:24 ` [PATCH 2/6] mmap: make mlock_future_check() global Mike Rapoport
2020-07-20  9:24 ` [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas Mike Rapoport
2020-07-20 11:30   ` Arnd Bergmann
2020-07-20 14:20     ` Mike Rapoport
2020-07-20 14:34       ` Arnd Bergmann
2020-07-20 17:46         ` Mike Rapoport
2020-07-20 15:51     ` James Bottomley
2020-07-20 18:08       ` Arnd Bergmann
2020-07-20 19:16         ` James Bottomley [this message]
2020-07-20 20:05           ` Arnd Bergmann
2020-07-21 10:59   ` Michael Kerrisk (man-pages)
2020-07-20  9:24 ` [PATCH 4/6] arch, mm: wire up secretmemfd system call were relevant Mike Rapoport
2020-07-26 17:44   ` Palmer Dabbelt
2020-07-20  9:24 ` [PATCH 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2020-07-20  9:24 ` [PATCH 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
